PacketStream is the data plane running through much of icey.
It is how media gets from capture to encoder to network transport. It is how receiver pipelines hand packets to decoders and muxers. It is how the library keeps dataflow explicit instead of hiding it behind callback soup.
If you understand PacketStream, a lot of the rest of icey stops looking like magic.
A PacketStream has three moving parts: a source that produces packets, an ordered chain of processors, and an emitter that dispatches to sinks.

The shape is simple:
```
source -> processor -> processor -> emitter -> sinks
```

Or, with multiple producers and a queue boundary:

```
source A --\
            -> PacketStream -> AsyncPacketQueue -> encoder -> sink
source B --/
```

That is not a metaphor. That is the actual execution model.
## IPacket

Everything moving through the graph is an IPacket.
The built-in packet types in base are:
- RawPacket for raw bytes
- FlagPacket for control markers with no payload

If you need something richer, define your own packet type and implement clone().
That clone() is not decorative. It is how retained or queued paths keep packet ownership honest.
## PacketStreamAdapter

Sources and processors both build on PacketStreamAdapter.
The important part is that adapters now declare their retention behavior:
- Borrowed
- Cloned
- Retained

That means the graph has an explicit ownership story instead of depending on comments and luck.
## PacketProcessor

A processor is an adapter with process(IPacket&).
Processors run in ascending order. Lower order runs first.
That makes chain assembly straightforward.
The fastest PacketStream graph is also the simplest one: a fully synchronous chain passing borrowed packets. In that case the caller keeps storage alive until the whole write or emit call returns.
That stops being true the moment you cross a retention boundary.
This is the rule that matters:

Upstream code may only reuse or free borrowed packet storage after:

- the synchronous write or emit call has returned, or
- the packet has been converted to a Cloned or Retained representation

That is the boundary.
In practical terms:
- rawPacket(buf, len) is zero-copy if buf is mutable
- rawPacket(const char*, len) makes an owned copy
- SyncPacketQueue clones before deferred dispatch
- AsyncPacketQueue clones before hopping to worker-thread processing
- synchronizeOutput() inserts a loop-thread queue boundary

If you are not sure whether some downstream stage is async, make the boundary explicit. Do not rely on "it probably finishes before we reuse this buffer."
A source owns an emitter and pushes packets into the graph.
Typical examples are capture devices and media readers.
If a source supports start and stop, the stream can synchronize its lifecycle with syncState=true.
That matters for capture devices and long-lived media readers. It lets the stream own more than just packet flow; it also owns when production begins and ends.
A processor transforms or filters packets.
Typical examples are decoders, transforms, and encoders.
Processors should be boring: do their work, declare what they handle through accepts(), and return.

If they defer work, they need to say so through retention semantics. Hidden async is how clean graphs turn into memory bugs.
Sinks are just slots attached to the stream emitter.
That is one of the reasons PacketStream scales across the library. A sink can be anything attachable as a slot, including another PacketStream.

There are two rules worth being blunt about.
PacketStream lets you compose flexible graphs. It does not mean topology should be casual.
The good pattern is to build the full topology first, then start flow. Then stop and tear down in the reverse direction.
Do not treat topology mutation during active flow as normal control flow.
Processor order is not just a convenience. It is how you say what the pipeline means.
If a graph matters, the order values should read like intent:
- 0 for queue or thread hop
- 5 for decode
- 10 for transform
- 20 for encode
- 30 for packetize

You do not need that exact numbering scheme, but you do need deliberate ordering.
The default PacketStream execution model is synchronous in the caller's thread.
That is good for performance and easy to reason about.
When you need a thread hop, do it explicitly.
### synchronizeOutput(loop)

Use this when downstream consumers need packets delivered on a specific event-loop thread.
Internally this inserts a SyncPacketQueue near the end of the graph.
### AsyncPacketQueue

Use this when processing should move off the producer's thread onto a worker.
This is the clean way to move work, not an excuse to make the whole graph vaguely asynchronous.
- MediaCapture -> VideoPacketEncoder -> WebRtcTrackSender
- WebRtcTrackReceiver -> VideoDecoder -> MultiplexPacketEncoder
- source session -> encoded packet fanout -> viewer senders

The same graph model covers all three. That is one of the reasons icey can keep media code relatively coherent.
This is the classic one: if a packet started as borrowed bytes and then crosses a queue or deferred processor, you need an explicit clone or retained representation before that handoff.
If a processor only works on one packet type, say so in accepts(). Do not make the graph discover that by tripping over a dynamic_cast later.
If you need to restructure the graph, stop it first.
### PacketStream as a generic event bus

It is a data plane. Use it when packets are actually flowing through a pipeline. Not every callback chain in the codebase needs to become a stream.
icey uses PacketStream in the places where performance and clarity usually fight each other.
The whole point of the abstraction is that you do not have to choose between them.
That is what gives the library a coherent core instead of five separate async models pretending to be one.