Add structured data in code evaluation logs #3077
base: main
Conversation
The rationale is that one would want an audit log only for people "manually" executing code in an app server. If the code being evaluated comes from a deployed app, I'm assuming it was reviewed beforehand. And if they want to log something there, they can use a logger inside their app's source code.
This will be easier to process from an external log aggregator. Useful when auditing code evaluation in an app server connected to a prod environment.
end
end

describe "code evaluation logging" do
@jonatanklosko, this is the first time I'm writing tests related to creating notebooks, cells, and apps. Please check if the code is okay.
  session_mode: session_mode,
  code: metadata_code,
  event: "code.evaluate"
)
In this case, you should log the metadata itself:
iex(1)> require Logger
iex(2)> Logger.info %{foo: 123}
So you don't duplicate the code in the message and metadata.
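For illustration, a rough sketch of how this could apply to the logging call in the diff above (variable names follow the diff; the final implementation may differ):

require Logger

# Log the structured map as the message itself, so the code is not
# duplicated in both the message string and the Logger metadata.
Logger.info(%{
  event: "code.evaluate",
  session_mode: session_mode,
  code: metadata_code
})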
Nice! I'll do that.
app_spec = Livebook.Apps.NotebookAppSpec.new(notebook)
deployer_pid = Livebook.Apps.Deployer.local_deployer()
ref = Livebook.Apps.Deployer.deploy_monitor(deployer_pid, app_spec, permanent: true)

app_pid =
  receive do
    {:deploy_result, ^ref, {:ok, pid}} ->
      Process.demonitor(ref, [:flush])
      pid
  end

session_id = App.get_session_id(app_pid)
You can use this:
livebook/test/support/app_helpers.ex (line 6 in 68c73db):

def deploy_notebook_sync(notebook) do
We can add an options argument to allow permanent: true.
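For example, a rough sketch of that helper change, assuming its body mirrors the deploy code in the diff above (the actual helper may differ):

def deploy_notebook_sync(notebook, opts \\ []) do
  app_spec = Livebook.Apps.NotebookAppSpec.new(notebook)
  deployer_pid = Livebook.Apps.Deployer.local_deployer()

  # Forward opts to the deployer, e.g. permanent: true
  ref = Livebook.Apps.Deployer.deploy_monitor(deployer_pid, app_spec, opts)

  receive do
    {:deploy_result, ^ref, {:ok, pid}} ->
      Process.demonitor(ref, [:flush])
      pid
  end
end

The test above could then call deploy_notebook_sync(notebook, permanent: true) instead of inlining the deploy, monitor, and receive steps.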
case cell.source do
  source when is_binary(source) -> source
  _ -> inspected_code
end
The source should always be a string, so we don't need this case.
I started working on docs for our "audit logs" feature, and I noticed we could make some improvements.
Structured log messages when evaluating code
The goal is to emit structured data instead of strings whenever possible, making it easier to filter code evaluation log entries and parse their data in an external log aggregator.
With this PR in place, one can run Livebook with these env vars:
MIX_ENV=prod LIVEBOOK_LOG_LEVEL=info \
  LIVEBOOK_LOG_METADATA="users,session_mode,code,event" \
  LIVEBOOK_LOG_FORMAT=json \
  mix phx.server
To get a nice structured log message when code is evaluated:
{"message":"Evaluating code\n Session mode: default\n Code: \"7 + 13\"","time":"2025-10-03T17:48:22.238Z","metadata":{"code":"7 + 13","session_mode":"default","event":"code.evaluate","users":[{"id":"1","name":"Hugo Baraúna"}]},"severity":"info"}
Notice that I kept "duplicated" data inside the "message" part of the log on purpose, even though it is now also available in the session_mode and code metadata. That's to avoid a breaking change: someone may already be relying on the value of "message" to extract code evaluation logs. We can deprecate that part of the message first and move to metadata only later.

Conditionally log code evaluation
Another change is that code evaluation is now only logged when the code is being evaluated in the context of a regular notebook session or an app preview session.
The rationale is that the purpose of logging code evaluation is to have a way to see who ran what code and when. This is relevant when someone is running code in a Livebook app server against a production environment.
However, logging code evaluation is arguably not relevant when the code is inside a deployed Livebook used by an internal user, since that user is not choosing which code to run—they are using an app or code deployed by someone else.
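As an illustration only (the mode names and the exact check are assumptions, not the PR's actual code), the condition could look roughly like this, inside the module that handles evaluation:

require Logger

# Hypothetical sketch: only emit the audit log entry for interactive
# sessions (regular notebooks and app previews), not for deployed apps.
defp maybe_log_evaluation(session_mode, code) when session_mode in [:default, :app_preview] do
  Logger.info(%{event: "code.evaluate", session_mode: session_mode, code: code})
end

defp maybe_log_evaluation(_session_mode, _code), do: :ok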