runtime/trace: flight recording #63185

Open

mknyszek opened this issue Sep 24, 2023 · 70 comments

@mknyszek
Contributor

mknyszek commented Sep 24, 2023

Proposal: runtime/trace flight recording

Updated: 23 September 2023

Background

"Flight recording" is a technique in which trace data is kept in a conceptual circular buffer, flushed upon request. The purpose of this technique is to capture traces of interesting program behavior, even when one does not know ahead of time when that will happen. For instance, if the web service fails a health check, or the web service takes an abnormally long time to handle a request. Specifically, the web service can identify such conditions when they happen, but the programmer setting up the environment can't predict when exactly they will occur. Starting tracing after something interesting happens also tends not to be useful, because the program has already executed the interesting part.

The Java ecosystem has had this for years through Java's flight recorder. Once the JVM's flight recorder is enabled, the JVM can obtain a trace representing the last few seconds of time. This trace can come from triggers set up in JMX, or by passing a flag to the JVM that dumps a trace on exit.

With the implementation of #60773 now approaching a stable point, hopefully in Go 1.22 we'll have all traces become series of self-contained partitions. This implementation change presents an opportunity to easily add something similar to the Go execution tracer by always retaining at least one partition that can be snapshotted at any time.

This is also enabled by work in the Go 1.21 release to make traces dramatically cheaper. Because flight recording relies on waiting until something interesting happens, tracing needs to be enabled for a much longer period of time. Enabling flight recording across, for example, a small portion of a production fleet, becomes much more palatable when the tracing itself isn't too expensive.

Design

The core of the design is a new API in the runtime/trace package to enable flight recording. This means that programs can be instrumented with their own triggers.

package trace

// FlightRecorder represents a flight recording configuration.
//
// Flight recording can be thought of as a moving window over
// the trace data, with the window always containing the most
// recent trace data.
//
// Only one flight recording may be active at any given time.
// This restriction may be removed in the future.
type FlightRecorder struct {
    ...
}

// NewFlightRecorder creates a new flight recording configuration.
func NewFlightRecorder() *FlightRecorder

// SetMinAge sets a lower bound on the age of an event in the flight recorder's window.
//
// The flight recorder will strive to promptly discard events older than the minimum age,
// but older events may appear in the window snapshot. The age setting will always be
// overridden by SetMaxBytes.
//
// The initial minimum age is implementation defined, but can be assumed to be on the order
// of seconds.
//
// Adjustments to this value will not apply to an active flight recorder.
func (*FlightRecorder) SetMinAge(d time.Duration)

// MinAge returns the current MinAge setting.
func (*FlightRecorder) MinAge() time.Duration

// SetMaxBytes sets an upper bound on the size of the window in bytes.
//
// This setting takes precedence over SetMinAge.
// However, it does not make any guarantees on the size of the data WriteTo will write,
// nor does it guarantee memory overheads will always stay below MaxBytes. Treat it
// as a hint.
//
// The initial size is implementation defined.
//
// Adjustments to this value will not apply to an active flight recorder.
func (*FlightRecorder) SetMaxBytes(bytes uint64)

// MaxBytes returns the current MaxBytes setting.
func (*FlightRecorder) MaxBytes() uint64

// Start begins flight recording. Only one flight recorder and one call to trace.Start may be active
// at any given time. Returns an error if starting the flight recorder would violate this rule.
func (*FlightRecorder) Start() error

// Stop ends flight recording. It waits until any concurrent WriteTo calls exit.
// Returns an error if the flight recorder is inactive.
func (*FlightRecorder) Stop() error

// Enabled returns true if the flight recorder is active. Specifically, it will return true if Start did
// not return an error, and Stop has not yet been called.
// It is safe to call from multiple goroutines simultaneously.
func (*FlightRecorder) Enabled() bool

// WriteTo takes a snapshot of the circular buffer's contents and writes the execution data to w.
// Returns the number of bytes written and an error. Only one goroutine may execute WriteTo at a time. 
// An error is returned upon failure to write to w, if another WriteTo call is already in-progress,
// or if the flight recorder is inactive.
func (*FlightRecorder) WriteTo(w io.Writer) (n int64, err error)
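
A minimal usage sketch of the proposed API, to make the intended workflow concrete (the trigger condition, threshold, file handling, and handleRequest below are illustrative only, not part of the proposal):

package main

import (
    "bytes"
    "log"
    "os"
    "runtime/trace"
    "time"
)

func main() {
    // Keep a window of roughly the last 10 seconds of execution.
    fr := trace.NewFlightRecorder()
    fr.SetMinAge(10 * time.Second)
    if err := fr.Start(); err != nil {
        log.Fatal(err)
    }
    defer fr.Stop()

    // Elsewhere, once something interesting is detected (here, a slow request):
    const slowRequestThreshold = 2 * time.Second // illustrative threshold
    start := time.Now()
    handleRequest() // hypothetical request handler
    if time.Since(start) > slowRequestThreshold {
        var buf bytes.Buffer
        if _, err := fr.WriteTo(&buf); err != nil {
            log.Printf("flight recorder snapshot failed: %v", err)
            return
        }
        if err := os.WriteFile("slow-request.trace", buf.Bytes(), 0o644); err != nil {
            log.Printf("writing trace: %v", err)
        }
    }
}

func handleRequest() { /* hypothetical work */ }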

Implementation

Most of the implementation work has already been done. I think it would be OK to ship the implementation in golang.org/x/exp/trace, even though it has some inefficiencies (like having to copy buffers outside the runtime). We could make this more efficient by reference-counting the runtime buffers to avoid a copy.

The main thing that definitely needs to change, however, is that the flight recorder needs to be able to run simultaneously with a call to trace.Start, which is currently not possible, since golang.org/x/exp/trace uses trace.Start itself. This can be implemented by having the trace reader goroutine (misnomer) write buffers to multiple writers. All we need to do is call traceAdvance simultaneously with adding a new writer, so that the new writer always starts receiving data on a generation boundary.

Discussion

SetMinAge and SetMaxBytes could give more rigid guarantees, but it's both complex to implement and not terribly useful. The primary use-case for SetMinAge is to allow users to ask for longer traces (for example, if a web service's "long request" means something much longer than a handful of seconds). Meanwhile the primary use-case of SetMaxBytes is to control memory overheads and limit the damage caused by a large SetMinAge.

WriteTo could allow multiple goroutines to call it since it could easily serialize them internally. However, this can create some bad situations. For instance, consider some snapshot trigger condition that causes multiple goroutines to call WriteTo. The call is heavyweight and they'll queue up behind each other; the longest one will likely take quite a while to resolve, and the application will be significantly disrupted. It'll also produce traces that aren't very useful (consisting of short partitions corresponding approximately to the duration of the last WriteTo call) unless we also allow multiple goroutines to read the same partition's buffers. However, that would be fairly complicated to implement and doesn't add much value, since it's just duplicate data. The current design side-steps these issues, and reduces the risk of run-time panics, by returning an error in this case.

Alternatives considered

External circular buffer

@dominikh has suggested adding a similar feature to gotraceui. Because the partitioning is actually baked into the trace's structure, it's possible for trace consumers to implement something similar themselves. The only thing missing is a standard streaming endpoint for execution traces (see follow-up work).

However, there are a few advantages to adding support directly to the standard library and runtime.

  • It opens the door to taking trace snapshots before a program crashes, which is going to be tricky to make work in general from outside the runtime.
  • It's more efficient in that no trace data needs to be processed (or sent to another process, or over the network) until it's actually needed.
  • Any external circular buffer has to make decisions solely on trace content. There's a minor ease-of-use improvement for those doing ad-hoc debugging, since they don't have to model their conditions in the runtime/trace package's annotations, but can instead decide when to grab a snapshot directly.
  • More control over trace partitioning (via SetMinAge and SetMaxBytes). Any external solution will be at the mercy of the runtime's defaults.

Despite these advantages, it's likely worth pursuing support for such a use-case even if the API described in this proposal is made available. A shortcoming of this document's proposal is that there's no way to trigger a snapshot explicitly against trace data, only program logic. Handling traces externally also means the ability to perform ad-hoc analyses without the need for additional instrumentation.

Follow-up work

Add support for trace streaming endpoints to net/http/pprof

As stated in the discussion of the "external circular buffer" alternative, we could support that alternative easily and well by just adding a standard debugging endpoint for streaming trace data. It probably makes the most sense to just add new query parameters to the existing trace endpoint; the details of that can be hashed out elsewhere.
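
As a rough illustration, a hand-rolled handler along these lines would stream trace data until the client disconnects (the route and handler below are hypothetical; the actual endpoint design is out of scope here, and a real endpoint would also need to cope with only one trace.Start consumer being allowed at a time):

package main

import (
    "net/http"
    "runtime/trace"
)

// traceStreamHandler streams execution trace data to the client until the
// request is canceled.
func traceStreamHandler(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/octet-stream")
    if err := trace.Start(w); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    defer trace.Stop()
    <-r.Context().Done() // stream until the client goes away
}

func main() {
    http.HandleFunc("/debug/trace/stream", traceStreamHandler)
    http.ListenAndServe("localhost:6060", nil)
}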

@mknyszek
Contributor Author

CC @aclements @prattmic @felixge @nsrip-dd @dominikh @bboreham @rhysh

@dominikh
Member

I agree with all of the advantages of having flight recording in the runtime.

As for

A shortcoming of this document's proposal is that there's no way to trigger a snapshot explicitly against trace data, only program logic

I think that program logic will be the far more useful trigger for the majority of use cases, and certainly easier to make use of.
Program logic can also make use of all the exposed runtime metrics, bringing it closer to the power of triggering on trace data.

On top of that, this flight recorder will be much more straightforward to integrate with existing telemetry solutions, compared to external solutions, which will either not be designed for that (Gotraceui) or will have to be written first.

@bboreham
Contributor

It opens the door to taking trace snapshots before a program crashes

Just to say the most frequent "I wish I knew what happened just before" for me is OOM, which isn't (AFAIK) something you can trigger on once the kernel makes its decision. Maybe we could trigger on hitting GOMEMLIMIT?

@rhysh
Contributor

rhysh commented Sep 24, 2023

Nice!

How can we get CPU profile samples to show up in the flight-recorder data stream? The existing APIs for execution traces and CPU profiles have a start time and an end time, so those two pair well together. For this it seems that we'd need a way to ask for SIGPROF deliveries during the whole time the flight recorder is running, without preventing normal on-demand use of runtime.StartCPUProfile.

The panics in WriteTo look like they'd force those to coordinate with calls to Stop (to avoid the "flight recorder isn't running" panic). Could they instead return errors if there's another WriteTo call, and end early (with an error) if there's a concurrent Stop call? Panic risk like this makes me nervous; I think instrumentation in general should Do No Harm.

consisting of short partitions corresponding approximately to the duration of the last WriteTo call

You've said that the data from WriteTo calls won't overlap. But on the other side, do the partitions in the new data format include identifiers (partition sequence number $X from thread/M $Y, and a list of all Ms?) that allow stitching together traces that are known to have no gaps from a set of shorter WriteTo calls?

@ianlancetaylor moved this to Incoming in Proposals Sep 26, 2023
@mknyszek
Contributor Author

Just to say the most frequent "I wish I knew what happened just before" for me is OOM, which isn't (AFAIK) something you can trigger on once the kernel makes its decision. Maybe we could trigger on hitting GOMEMLIMIT?

It turns out that's quite difficult to do because Linux provides no opportunity to dump any information when an OOM occurs; the OOM killer simply SIGKILLs processes. Unfortunately GOMEMLIMIT also doesn't help, because GOGC=off GOMEMLIMIT=<value> is a valid configuration. In that configuration, the program is always at GOMEMLIMIT from the perspective of the operating system.

With that being said, there are still some best-effort things we might be able to do. Programs that don't use GOMEMLIMIT can look at their own runtime/metrics stats and dump a trace when their total memory use (the same expression the GOMEMLIMIT machinery uses to account for total memory) exceeds some threshold. For programs that do use GOMEMLIMIT, it's possible to use the new live heap metric to do something similar: a measure that doesn't account for newly allocated data, which gives an indication of how close the program's live memory is to the limit.
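
A rough sketch of what that kind of best-effort watcher could look like, assuming the FlightRecorder API proposed above (the metric choice, polling interval, threshold, and file name are illustrative; the GOMEMLIMIT machinery's accounting expression differs slightly from total mapped memory):

package main

import (
    "log"
    "os"
    "runtime/metrics"
    "runtime/trace"
    "time"
)

// watchMemory polls the runtime's total mapped memory and snapshots the
// flight recorder once it crosses softLimit. Best-effort only: the process
// can still OOM before the snapshot finishes writing.
func watchMemory(fr *trace.FlightRecorder, softLimit uint64) {
    samples := []metrics.Sample{{Name: "/memory/classes/total:bytes"}}
    for range time.Tick(time.Second) {
        metrics.Read(samples)
        if samples[0].Value.Uint64() < softLimit {
            continue
        }
        f, err := os.Create("oom-suspect.trace")
        if err != nil {
            log.Printf("creating trace file: %v", err)
            return
        }
        if _, err := fr.WriteTo(f); err != nil {
            log.Printf("snapshot failed: %v", err)
        }
        f.Close()
        return
    }
}

func main() {
    fr := trace.NewFlightRecorder()
    if err := fr.Start(); err != nil {
        log.Fatal(err)
    }
    go watchMemory(fr, 1<<30) // e.g. snapshot when total memory exceeds 1 GiB
    select {}                 // stand-in for the real program's work
}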

Either way though, it's still best-effort. Unless the snapshotting fully stops the world, the program may continue executing and OOM before the trace actually gets fully dumped. (Even then, it's still possible (but less likely) that it OOMs before the runtime successfully stops every running goroutine.)

Once upon a time there was a patch proposed to Linux to allow for halting a process when it hit container limits, so that another process in a tiny memory reservation could inspect it. One could imagine that if this other process created a core dump of the halted process, we could write some code to extract any active trace buffers from the core dump into a trace.

I'm not sure where this leaves us. Perhaps it suggests that WriteTo should really stop-the-world, and do all the copying during the stop-the-world, so that these difficult cases are more deterministic. But that hurts other cases like "I just want to know why my request is taking a long time sometimes," since it has a bigger run-time performance impact, especially if your intent is to capture multiple such cases (i.e. leave it running in production for a while) and aggregate them. And in the end, it's also just a patch on the broader problem which is that it's surprisingly hard to just get information about an OOM on Linux to begin with.

@mknyszek reopened this Sep 26, 2023
@mknyszek
Contributor Author

mknyszek commented Sep 26, 2023

Nice!

How can we get CPU profile samples to show up in the flight-recorder data stream? The existing APIs for execution traces and CPU profiles have a start time and an end time, so those two pair well together. For this it seems that we'd need a way to ask for SIGPROF deliveries during the whole time the flight recorder is running, without preventing normal on-demand use of runtime.StartCPUProfile.

If you happen to have CPU profiling running, it'll just work, but you make a good point that there's no good way to have it included all the time (i.e. no intention of producing a CPU profile). It seems to me like that should maybe be another option, either on the FlightRecorder, or on some new CPU profile configuration value. I'm not sure where it would be better to add this so that it composes well. My gut tells me it should go on the CPU profile API. (One could imagine that CPU profiling could have a similar mode, where the profiling data is just kept in the buffer and snapshotted every once in a while. That would mirror flight recording, just like trace.Start mirrors StartCPUProfile (as you point out).)

The panics in WriteTo look like they'd force those to coordinate with calls to Stop (to avoid the "flight recorder isn't running" panic). Could they instead return errors if there's another WriteTo call, and end early (with an error) if there's a concurrent Stop call? Panic risk like this makes me nervous; I think instrumentation in general should Do No Harm.

Hm, that's a good point. I'll update the API to return an error on Stop and update the reasons WriteTo could fail.

consisting of short partitions corresponding approximately to the duration of the last WriteTo call

You've said that the data from WriteTo calls won't overlap. But on the other side, do the partitions in the new data format include identifiers (partition sequence number $X from thread/M $Y, and a list of all Ms?) that allows stitching together traces that are known to have no gaps from a set of shorter WriteTo calls?

There's currently no list of all the Ms per generation because we don't have M events, but yes, everything within a partition is namespaced by that partition's "generation number." The proposed trace parsing API exposes partition changes as a Sync event, so it'll be possible to identify when it happens. The number won't be exposed, but users of the API can assign their own identifiers to each partition. (If you're parsing the trace directly, then yeah the generation number will be available.)

FWIW, the trace parsing API already does this exact kind of "stitching." Every partition is an entirely self-contained trace, which means all goroutines (and Ps) and their statuses get named in every partition. The trace parser uses this information to validate the stitching: new partitions' goroutine statuses need to match where that goroutine left off in the previous partition.
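
As a sketch of what that looks like from the consumer side, counting partition (generation) boundaries with the parsing API is roughly the following (assuming the golang.org/x/exp/trace reader; the file name is illustrative):

package main

import (
    "fmt"
    "io"
    "log"
    "os"

    "golang.org/x/exp/trace"
)

func main() {
    f, err := os.Open("prog.trace")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    r, err := trace.NewReader(f)
    if err != nil {
        log.Fatal(err)
    }
    partitions := 0
    for {
        ev, err := r.ReadEvent()
        if err == io.EOF {
            break
        }
        if err != nil {
            log.Fatal(err)
        }
        // A Sync event marks a partition (generation) boundary.
        if ev.Kind() == trace.EventSync {
            partitions++
        }
    }
    fmt.Println("partitions:", partitions)
}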

@rsc moved this from Incoming to Active in Proposals Nov 2, 2023
@rsc
Contributor

rsc commented Nov 2, 2023

This proposal has been added to the active column of the proposals project
and will now be reviewed at the weekly proposal review meetings.
— rsc for the proposal review group

@rsc
Contributor

rsc commented Dec 6, 2023

I think this is waiting on an implementation and experience from using that implementation.

@mknyszek
Contributor Author

mknyszek commented Dec 6, 2023

That's correct. I plan to have one in golang.org/x/exp/trace soon.

@mknyszek
Contributor Author

mknyszek commented Jan 3, 2024

I made a mistake in the commit message so gopherbot couldn't connect the CL to this issue, but the experimental API has now landed via https://go.dev/cl/550257. It is available in golang.org/x/exp/trace for Go 1.22 only.

There are a few caveats with this implementation that will not be true with a real runtime implementation.

  • Higher CPU overhead: because this implementation lives outside the runtime, it needs to keep a copy of the latest trace data. Copying that data should be less expensive than writing out trace data via HTTP, so I still generally expect CPU overheads to be about the same as just having tracing enabled, which is good. It can get slightly better than that with a runtime implementation.
  • Higher memory overhead: again, because we need to keep a copy of the latest trace data, expect the memory overheads to be higher than what the actual implementation can achieve. It's probably not a full 2x because the runtime will have to keep trace data around longer itself, but the golang.org/x/exp/trace implementation has to wait a little longer to throw out old trace data. I wouldn't expect this to be more than a few MiB for most applications, though.
  • Higher snapshot latency: the latency of the actual snapshot operation (WriteTo) is higher because the implementation actually needs to make 2 consecutive global buffer flushes (the results of both appear in the output, so in practice it shouldn't make a difference for analysis). This is because the end of a trace generation isn't explicitly signaled but derived from new trace data. This was perhaps a mistake in the format that can be fixed in a backwards-compatible way, but the runtime implementation has complete knowledge of when a generation starts and ends, so this won't be a problem in the future.

I don't think any of these are significant enough to detract from the usefulness of the experiment, but I wanted to bring it up in case one of these does become an issue. We can also explore ways to improve the experiment to make it more representative, if one of them is indeed a problem.

Please give it a try!

@martin-sucha
Contributor

The x/exp/trace flight recorder seems to work nicely for me so far. Thanks for adding it!

One limitation I encountered is that the runtime/trace package does not allow multiple trace.Start calls at the same time, so I needed to disable capturing runtime traces in the DataDog library with DD_PROFILING_EXECUTION_TRACE_ENABLED=false env variable. I wonder if we could enable multiple consumers of the trace data at the same time. cc @felixge

As for preserving traces after OOM, one possible approach could be to use a separate agent process with a shared memory buffer to write the traces into. That agent could then detect that the main process crashed and write the contents of the memory buffer into a file or other storage. However, this would require keeping all the flight recorder data in a single pre-allocated buffer.

@mknyszek
Contributor Author

Thanks for trying it out!

One limitation I encountered is that the runtime/trace package does not allow multiple trace.Start calls at the same time, so I needed to disable capturing runtime traces in the DataDog library with DD_PROFILING_EXECUTION_TRACE_ENABLED=false env variable. I wonder if we could enable multiple consumers of the trace data at the same time.

Yeah, that's a limitation of the experiment which lives out-of-tree for convenience, but it technically shouldn't be necessary if it's implemented in the runtime. I'd hoped to allow one of each type of consumer (one active flight recorder, one active trace.Start).

@rhysh
Contributor

rhysh commented Jun 25, 2024

I wonder if we could enable multiple consumers of the trace data at the same time.

@mknyszek , we've discussed allowing the caller to control the set of events that the trace includes. That would enable runtime maintainers to use the execution trace to grant access to events that are niche and expensive, or in the other direction would enable low-overhead access to a subset of events.

Maybe this is how we build/improve delta profiles for #67942, and how we give access to goroutine tags/labels in monotonic profiles (heap, mutex, etc) for #23458.

If that's the direction we go, I think we'll want either a/ multiple trace.Start consumers (trace.(*Config).Start?), each with their own set of events or b/ a way to change the set of events without restarting the trace, so a user-space package like x/exp/trace can be the single subscriber on behalf of multiple others within the program.

@mknyszek
Contributor Author

Multiple trace.Start consumers isn't too hard to do (each new trace.Start registers another place for the trace reader goroutine to write to, and calls traceAdvance if the trace is already running to get a consistent starting point). But customizing each one is harder.

The only way I can imagine this working efficiently is via something like https://go.dev/cl/594595. TL;DR: We have some light infrastructure for emitting "experimental" events now, and this CL formalizes that a bit better by allowing us to split the event stream. It's at least a step toward that. If we have separate buffer streams, you could imagine a new registration consists of an io.Writer and a bitmask indicating which streams to accept data from (internally).

It's certainly not off the table, but I think we're a little ways off still.

@arianvp

arianvp commented Jul 8, 2024

It turns out that's quite difficult to do because Linux provides no opportunity to dump any information when an OOM occurs; the OOM killer simply SIGKILLs processes. Unfortunately GOMEMLIMIT also doesn't help, because GOGC=off GOMEMLIMIT=<value> is a valid configuration. In that configuration, the program is always at GOMEMLIMIT from the perspective of the operating system.

I'd like to point out that in Linux it's now possible to be actively informed by the kernel through an epoll event if a control group is under memory pressure through the PSI subsystem.

https://systemd.io/MEMORY_PRESSURE/
https://docs.kernel.org/accounting/psi.html#monitoring-for-pressure-thresholds

This could be used as a signal to dump a trace and/or to trigger GC proactively.
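
For example, registering a PSI trigger and waiting on it from Go could look roughly like this (a sketch assuming cgroup v2 and golang.org/x/sys/unix; the path and thresholds are illustrative and depend on how the process is containerized):

package main

import (
    "log"
    "os"

    "golang.org/x/sys/unix"
)

func main() {
    // Register a PSI trigger: notify when "some" memory stall time exceeds
    // 150ms within any 1s window (both values are in microseconds).
    f, err := os.OpenFile("/sys/fs/cgroup/memory.pressure", os.O_RDWR, 0)
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    if _, err := f.WriteString("some 150000 1000000"); err != nil {
        log.Fatal(err)
    }

    fds := []unix.PollFd{{Fd: int32(f.Fd()), Events: unix.POLLPRI}}
    for {
        if _, err := unix.Poll(fds, -1); err != nil {
            log.Fatal(err)
        }
        if fds[0].Revents&unix.POLLERR != 0 {
            return // trigger file closed or cgroup removed
        }
        if fds[0].Revents&unix.POLLPRI != 0 {
            // Memory pressure threshold crossed: dump a flight recorder
            // snapshot and/or trigger a GC here.
            log.Println("memory pressure event")
        }
    }
}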

@rsc
Contributor

rsc commented Jul 24, 2024

This has been "waiting for implementation experience". Do we have enough experience to think about whether to add this to Go 1.24?

@mknyszek
Contributor Author

We've had some experience with flight recording inside Google, and we've also gotten some good feedback from @martin-sucha.

I think a few things are clear from our experiences so far:

  • The feature is useful. We've used it to peer at program behavior that was quite difficult to capture before.
  • Allowing at least one flight recorder and one trace.Start caller is crucial to smooth usage.
  • Control over the size of the recorder's window is also important, since a good size tends to be application-dependent and has a cost.
  • Rolling this out would likely be helped by net/http/pprof integration, which is going to be a little complex but possible. (But that likely needs its own proposal.)

Much of this can be refined and improved once we have an implementation that lives fully inside the runtime.

Lastly, from my point of view, I don't know what other feedback we would be waiting on at this point.

CC @cagedmantis

@mknyszek
Contributor Author

mknyszek commented Dec 16, 2024

  • Does it cut off an ongoing generation immediately, or does it wait until it finishes? We'd prefer the former as it would make flight recording more useful for the "something bad happened and we might be crashing soon" use case.

Yes, it cuts off an on-going generation immediately so you have up-to-date data. I don't think that's necessary to document, really, since I think that's the most intuitive and useful behavior it could have. Though I guess we could say something like "best-effort up-to-date data as of when WriteTo returns" so nobody has to second-guess it.

  • What happens when calling WriteTo() on the same flight recorder twice? Do the two recordings overlap? And is that behavior the same when multiple flight recorders are used?

Calling multiple WriteTo's from different flight recorders is fine in general; the only problem is that they have to queue behind each other to cut a new generation, though we could be a bit smarter about that. For example, if traceAdvance is just about to cut a new generation, then both calls can count that as a recent enough cut. But I think if traceAdvance is past the cut-off point (updating trace.gen) then one should probably wait for the other, because it's more likely to be stale. (This is kind of how concurrent calls to runtime.GC work.)

Multiple WriteTo's are currently disallowed on the same FlightRecorder (it's a returned error) for two reasons.
1. I think of a single FlightRecorder as "targeting" some kind of specific event. So returning an error allows deduplicating multiple WriteTo's covering the same time period for the same FlightRecorder (take the first one). (I'm not sure how useful this is, but it seems useful to not accidentally generate N very similar dumps all at the same time.)
2. To avoid complicated synchronization on the FlightRecorder's internals.

@aclements
Member

@mknyszek , would you mind posting the latest version of the full proposed API as a self-contained comment? Thanks!

andrewbaptist added a commit to andrewbaptist/cockroach that referenced this issue Dec 26, 2024
This commit adds the ability to capture execution traces from the past
few seconds of execution when something seems wrong. Often when a timer
fires and we detect something is wrong, the relevant information is
already lost. The new flight recorder in go
golang/go#63185 creates a ring buffer that
enables capturing these traces. This commit adds the capability to
capture traces but doesn't enable it anywhere.

There is a small performance cost of having the flight recorder always
enabled, so some performance testing is required to determine if we need
to protect this behind a cluster setting.

Epic: none

Release note: None
andrewbaptist added a commit to andrewbaptist/cockroach that referenced this issue Dec 27, 2024
@aclements
Member

@mknyszek , would you mind posting the latest version of the full proposed API as a self-contained comment? Thanks!

Bump 😊

@mknyszek
Contributor Author

mknyszek commented Jan 8, 2025

package trace

// FlightRecorder represents a single consumer of a Go execution
// trace.
// It tracks a moving window over the execution trace produced by
// the runtime, always containing the most recent trace data.
//
// At most one flight recorder may be active at any given time,
// though flight recording is allowed to be concurrently active
// with a trace consumer using trace.Start.
// This restriction of only a single flight recorder may be removed
// in the future.
type FlightRecorder struct {
    ...
}

// NewFlightRecorder creates a new flight recorder from the provided configuration.
func NewFlightRecorder(cfg FlightRecorderConfig) *FlightRecorder

// Start activates the flight recorder and begins recording trace data.
// Currently, only one flight recorder and one call to trace.Start may be
// active simultaneously.
// This restriction may be lifted in the future.
// Returns an error if Start is called multiple times on the same
// FlightRecorder, or if Start would cause there to be more
// simultaneously active trace consumers than is currently supported.
func (*FlightRecorder) Start() error

// Stop ends recording of trace data. It blocks until any concurrent WriteTo calls complete.
func (*FlightRecorder) Stop()

// Enabled returns true if the flight recorder is active.
// Specifically, it will return true if Start did not return an error, and Stop has not yet been called.
// It is safe to call from multiple goroutines simultaneously.
func (*FlightRecorder) Enabled() bool

// WriteTo snapshots the moving window tracked by the flight recorder.
// The snapshot is expected to contain data that is up-to-date as of when WriteTo is called,
// though this is not a hard guarantee.
// Only one goroutine may execute WriteTo at a time. 
// An error is returned upon failure to write to w, if another WriteTo call is already in-progress,
// or if the flight recorder is inactive.
func (*FlightRecorder) WriteTo(w io.Writer) (n int64, err error)

type FlightRecorderConfig struct {
	// MinAge is a lower bound on the age of an event in the flight recorder's window.
	//
	// The flight recorder will strive to promptly discard events older than the minimum age,
	// but older events may appear in the window snapshot. The age setting will always be
	// overridden by MaxSize.
	//
	// If this is 0, the minimum age is implementation defined, but can be assumed to be on the order
	// of seconds.
	MinAge time.Duration

	// MaxBytes is an upper bound on the size of the window in bytes.
	//
	// This setting takes precedence over MinAge.
	// However, it does not make any guarantees on the size of the data WriteTo will write,
	// nor does it guarantee memory overheads will always stay below MaxBytes. Treat it
	// as a hint.
	//
	// If this is 0, the maximum size is implementation defined.
	MaxBytes uint64
}
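
For example, wiring up a recorder with this configuration would look something like the following (the specific values and file handling are illustrative):

package main

import (
	"bytes"
	"log"
	"os"
	"runtime/trace"
	"time"
)

func main() {
	fr := trace.NewFlightRecorder(trace.FlightRecorderConfig{
		MinAge:   10 * time.Second, // keep roughly the last 10 seconds
		MaxBytes: 16 << 20,         // but cap the window at ~16 MiB
	})
	if err := fr.Start(); err != nil {
		log.Fatal(err)
	}
	defer fr.Stop()

	// Later, when a trigger condition fires:
	var buf bytes.Buffer
	if _, err := fr.WriteTo(&buf); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("snapshot.trace", buf.Bytes(), 0o644); err != nil {
		log.Fatal(err)
	}
}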

@martin-sucha
Contributor

Looks good, thanks! Just a nitpick, there is MaxSize in the doc comment, but MaxBytes in the field name.

@meling

meling commented Jan 12, 2025

I wonder what a user would do with the error from Start and Stop? Could "redundant" calls just be ignored, avoiding the need to return an error? Users can always check if it is running with Enabled.


@aclements
Member

// Each FlightRecorder should represent some specific behavior
// that the caller is trying to capture about their program.
// For instance, recording a trace of a slow RPC response from a
// specific endpoint.

I'm not sure what this means. Given that you snapshot a flight recorder after something unexpected has happened, it needs to already be running, so how can you have a flight recorder for each behavior?

// Start activates the flight recorder and begins consuming trace data.

"Consuming" seems like an implementation detail. "Recording"?

I wonder what a user would do with the error from Start and Stop? Could "redundant" calls just be ignored, avoiding the need to return an error? Users can always check if it is running with Enabled.

I could imagine other error categories from Start but I'm not sure what you do with an error from Stop. Maybe it could just be a silent no-op if the recorder isn't started?

@mknyszek
Contributor Author

I'm not sure what this means. Given that you snapshot a flight recorder after something unexpected has happened, so it needs to already be running, so how can you have a flight recorder for each behavior?

I think there's a misunderstanding here. It was just supposed to be some advice as to how to actually use it, but it imagines a world where you can have multiple independent FlightRecorder instances all enabled. When some exceptional behavior occurs, you have to decide which FlightRecorder to call WriteTo on, and the idea is that you have one FlightRecorder instance per unexpected behavior you want to track, so that they compose. This way, independent packages don't have to think about finding the single global FlightRecorder to call WriteTo on, but can instead each create their own FlightRecorder. (Under the hood, all these independent FlightRecorder instances could share a ring buffer whose parameters are maximally permissive across all instances.)

Anyway, such a world will not exist to begin with, so I'll remove that from the API doc.

"Consuming" seems like an implementation detail. "Recording"?

Sure, will update.

I could imagine other error categories from Start but I'm not sure what you do with an error from Stop. Maybe it could just be a silent no-op if the recorder isn't started?

I agree. I'll remove the error.

@mknyszek
Contributor Author

I have updated #63185 (comment) to include the latest feedback.

@mknyszek
Contributor Author

I wonder what a user would do with the error from Start and Stop? Could "redundant" calls just be ignored, avoiding the need to return an error? Users can always check if it is running with Enabled.

For Stop, I agree. For Start, I think it's better to notify when that fails ASAP. WriteTo can fail instead, or we can ask people to check Enabled, but that feels more awkward than just letting callers know immediately that this flight recorder won't work. (There are also other errors that can occur in Start, such as calling Start twice on the same FlightRecorder. We could be loose about that, but I don't see any particular advantage to doing so. Requiring matching Start and Stop calls feels less messy for everyone. I updated the doc to mention it.)

andrewbaptist added a commit to andrewbaptist/cockroach that referenced this issue Jan 22, 2025
@aclements
Member

In Start, the documentation is currently pretty strict about it returning an error if a flight recorder is already active. Since we may want to lift that restriction in the future, let's be less strict about that wording.

@mknyszek
Contributor Author

I loosened up the language in the proposed API docs for Start. It's a little clunky but I think it gets the point across, and we can workshop it in review.

@cherrymui
Member

cherrymui commented Jan 23, 2025

A minor point: would it be better for FlightRecorderConfig.MaxBytes to be uintptr? The value shouldn't be larger than the address space. On the other hand, uint64 matches WriteTo.

@aclements
Member

Based on the discussion above, this proposal seems like a likely accept.
— aclements for the proposal review group

The proposal is to add support for a trace flight recorder to the trace package:

package trace

// FlightRecorder represents a single consumer of a Go execution
// trace.
// It tracks a moving window over the execution trace produced by
// the runtime, always containing the most recent trace data.
//
// At most one flight recorder may be active at any given time,
// though flight recording is allowed to be concurrently active
// with a trace consumer using trace.Start.
// This restriction of only a single flight recorder may be removed
// in the future.
type FlightRecorder struct {
    ...
}

// NewFlightRecorder creates a new flight recorder from the provided configuration.
func NewFlightRecorder(cfg FlightRecorderConfig) *FlightRecorder

// Start activates the flight recorder and begins recording trace data.
// Only one call to trace.Start may be active at any given time.
// In addition, currently only one flight recorder may be active in the program.
// Returns an error if the flight recorder cannot be started or is already started.
func (*FlightRecorder) Start() error

// Stop ends recording of trace data. It blocks until any concurrent WriteTo calls complete.
func (*FlightRecorder) Stop()

// Enabled returns true if the flight recorder is active.
// Specifically, it will return true if Start did not return an error, and Stop has not yet been called.
// It is safe to call from multiple goroutines simultaneously.
func (*FlightRecorder) Enabled() bool

// WriteTo snapshots the moving window tracked by the flight recorder.
// The snapshot is expected to contain data that is up-to-date as of when WriteTo is called,
// though this is not a hard guarantee.
// Only one goroutine may execute WriteTo at a time. 
// An error is returned upon failure to write to w, if another WriteTo call is already in-progress,
// or if the flight recorder is inactive.
func (*FlightRecorder) WriteTo(w io.Writer) (n int64, err error)

type FlightRecorderConfig struct {
	// MinAge is a lower bound on the age of an event in the flight recorder's window.
	//
	// The flight recorder will strive to promptly discard events older than the minimum age,
	// but older events may appear in the window snapshot. The age setting will always be
	// overridden by MaxSize.
	//
	// If this is 0, the minimum age is implementation defined, but can be assumed to be on the order
	// of seconds.
	MinAge time.Duration

	// MaxBytes is an upper bound on the size of the window in bytes.
	//
	// This setting takes precedence over MinAge.
	// However, it does not make any guarantees on the size of the data WriteTo will write,
	// nor does it guarantee memory overheads will always stay below MaxBytes. Treat it
	// as a hint.
	//
	// If this is 0, the maximum size is implementation defined.
	MaxBytes uint64
}

@aclements moved this from Active to Likely Accept in Proposals Jan 23, 2025
@mknyszek
Contributor Author

A minor point: would it be better for FlightRecorderConfig.MaxBytes to be uintptr? The value shouldn't be larger than the address space. On the other hand, uint64 matches WriteTo.

My feeling is that we tend to use uint64 or int64 to mean bytes, which is why I picked this. Then again, when used in some interfaces, it needs to be larger than the address space because IIUC you can have a >4 GiB file on disk on 32-bit systems.

And of course you're right, this is totally in-memory. int or uint is a little friendlier than uintptr and works similarly today.

I don't feel very strongly about this either way, since I don't have a strong sense of what the norms are. Signed integers are a little annoying because then we have to say what negative means (probably also "implementation defined"), but we use int everywhere already and it's not a big deal.

@cherrymui
Member

I don't feel strongly about it either. uint64 is probably fine.

@aclements moved this from Likely Accept to Accepted in Proposals Jan 28, 2025
@aclements
Member

No change in consensus, so accepted. 🎉
This issue now tracks the work of implementing the proposal.
— aclements for the proposal review group

The proposal is to add support for a trace flight recorder to the trace package:

package trace

// FlightRecorder represents a single consumer of a Go execution
// trace.
// It tracks a moving window over the execution trace produced by
// the runtime, always containing the most recent trace data.
//
// At most one flight recorder may be active at any given time,
// though flight recording is allowed to be concurrently active
// with a trace consumer using trace.Start.
// This restriction of only a single flight recorder may be removed
// in the future.
type FlightRecorder struct {
    ...
}

// NewFlightRecorder creates a new flight recorder from the provided configuration.
func NewFlightRecorder(cfg FlightRecorderConfig) *FlightRecorder

// Start activates the flight recorder and begins recording trace data.
// Only one call to trace.Start may be active at any given time.
// In addition, currently only one flight recorder may be active in the program.
// Returns an error if the flight recorder cannot be started or is already started.
func (*FlightRecorder) Start() error

// Stop ends recording of trace data. It blocks until any concurrent WriteTo calls complete.
func (*FlightRecorder) Stop()

// Enabled returns true if the flight recorder is active.
// Specifically, it will return true if Start did not return an error, and Stop has not yet been called.
// It is safe to call from multiple goroutines simultaneously.
func (*FlightRecorder) Enabled() bool

// WriteTo snapshots the moving window tracked by the flight recorder.
// The snapshot is expected to contain data that is up-to-date as of when WriteTo is called,
// though this is not a hard guarantee.
// Only one goroutine may execute WriteTo at a time. 
// An error is returned upon failure to write to w, if another WriteTo call is already in-progress,
// or if the flight recorder is inactive.
func (*FlightRecorder) WriteTo(w io.Writer) (n int64, err error)

type FlightRecorderConfig struct {
	// MinAge is a lower bound on the age of an event in the flight recorder's window.
	//
	// The flight recorder will strive to promptly discard events older than the minimum age,
	// but older events may appear in the window snapshot. The age setting will always be
	// overridden by MaxSize.
	//
	// If this is 0, the minimum age is implementation defined, but can be assumed to be on the order
	// of seconds.
	MinAge time.Duration

	// MaxBytes is an upper bound on the size of the window in bytes.
	//
	// This setting takes precedence over MinAge.
	// However, it does not make any guarantees on the size of the data WriteTo will write,
	// nor does it guarantee memory overheads will always stay below MaxBytes. Treat it
	// as a hint.
	//
	// If this is 0, the maximum size is implementation defined.
	MaxBytes uint64
}

@aclements changed the title from "proposal: runtime/trace: flight recording" to "runtime/trace: flight recording" Jan 28, 2025
@aclements modified the milestones: Proposal, Backlog Jan 28, 2025