c5ad98adc6
Max buffer size was way too small
2026-01-12 19:15:11 -05:00
9f690fe27a
Add a warning log when producers are stalled
2026-01-12 18:08:38 -05:00
7bcc0c19aa
Significant speed improvement
2026-01-12 09:37:47 -05:00
72e7df5d5c
Use appendAssumeCapacity instead of appendBounded catch unreachable
...
Basically the same thing.
2026-01-11 12:43:17 -05:00
05ad1b8ffc
Use cache line size for the cpu
2026-01-10 20:46:14 -05:00
8a2bcebd60
Cache align hot buffers
2026-01-10 20:14:42 -05:00
a93a1f0906
Use much higher buffer sizes
2026-01-10 16:49:50 -05:00
0861703ddc
Sleep to go faster
...
The problem was that I was flushing twice for every message when
doing request-reply.
This gives the sender the opportunity to finish writing a full message
to the queue, which we then check for before flushing.
This brings request-reply latency in benchmarks down from roughly
90 ms to 200 µs.
2026-01-10 16:42:43 -05:00
99ea755658
Send should be uncancelable
2026-01-10 11:37:47 -05:00
78b23ee59c
Use Subscription.send
2026-01-10 09:21:18 -05:00
ad13706d1b
Properly handle disconnect
2026-01-10 09:21:18 -05:00
4a228c8dba
Switch to UUIDs for clients
2026-01-10 09:21:18 -05:00
aec871ebdb
Port to latest 0.16.0
...
Use juicy main ;)
2026-01-10 09:21:18 -05:00
0ebc39b5e8
Parsing cleanup
2026-01-10 00:12:52 -05:00
f4b545f852
Improve errors in parse API
2026-01-08 23:46:49 -05:00
ed99115969
More robust parsing and error propagation
2026-01-08 22:35:35 -05:00
d8488fde49
Support HPUB
...
Fixed an issue where not all data was being sent.
Request-reply has a performance issue but technically works.
2026-01-08 16:47:52 -05:00
45feccbad8
WAY FASTER but doesn't send all?
...
Seems to not flush the last message
2026-01-07 23:19:19 -05:00
96a3705069
Starting zero-alloc parsing
2026-01-07 23:19:19 -05:00
e2a60c9427
Rename to match actual subcommand
2026-01-06 23:11:48 -05:00
3674792e3f
Scope parse logger
...
Also change the internal name to match the public name.
2026-01-06 23:04:19 -05:00
b6762ccb7c
Cleaner SIGINT handling
...
Use a Mutex to wait for the signal handler to fire instead of checking
an atomic boolean over and over again.
2026-01-06 23:03:21 -05:00
3b490fc3c8
Cleanup Server.zig
2026-01-06 22:27:13 -05:00
4896928352
Major restructuring
...
This makes things much easier to use as a library
2026-01-06 21:59:41 -05:00
cc03631838
Better cancelation handling
...
Based on this conversation with Andrew
https://ziggit.dev/t/am-i-canceling-my-std-io-group-incorrectly/13836
2026-01-06 21:21:14 -05:00
b87412ee66
Restructuring
...
Add a bunch of tests for the client
2026-01-06 20:43:49 -05:00
025a5344c8
Return error.Canceled from concurrent group task
2026-01-06 17:14:18 -05:00
c676a8218e
Support queue groups
2026-01-06 14:06:22 -05:00
81a93654a1
Don't reuse address
...
This was a temporary workaround for when I was not cleanly exiting.
Now that I am, this is not necessary.
2026-01-06 10:20:04 -05:00
6e9f6998bd
Use client allocator to own incoming messages to a client
2026-01-06 10:04:10 -05:00
318d467f5c
Optimize PUB and HPUB parsing
...
This takes better advantage of the buffered reading.
Instead of pushing one byte at a time into the array list for each
section, find the end index of each section first, then allocate the
array list and copy the data into it all at once.
2026-01-05 20:13:24 -05:00
3342aa22ab
Update tests to work again
2026-01-05 18:26:43 -05:00
1d2af4a69a
Simplified queue access
...
Also move resetting the task to the end instead of using defer.
We don't want to reset the task in the error case, so defer is the
wrong tool here.
2026-01-05 13:56:40 -05:00
80d14f7303
Display help when there is no subcommand
2026-01-05 13:47:27 -05:00
e50d53ee7e
Add Payload type
...
Stores short message buffers in a colocated array, overflowing to an
allocated slice when needed.
2026-01-05 10:34:31 -05:00
ca43a12b9b
Using separate queue for high throughput messages
2026-01-04 23:36:44 -05:00
69528a1b72
Probe for optimal network buffer size.
...
We want to match the underlying system socket buffer.
Filling this buffer minimizes the number of syscalls we do.
Larger would be a waste.
Also changed parser to use enums that more closely match the NATS
message types.
2026-01-04 20:57:31 -05:00
e81bcda920
holy moly goes way fast!!!
...
Like 150 Mbps now
2026-01-03 06:08:40 +00:00
fbc137e2b3
Kill dead code and use higher buffer
2026-01-03 06:02:47 +00:00
dcd09e2f10
Clean up imports
2026-01-03 05:54:14 +00:00
bd9829f684
Organize things
...
Making it easier to use the server as a library
2026-01-03 05:33:56 +00:00
a4ec798521
Fix parse errors, ownership errors.
2026-01-03 03:17:13 +00:00
9e32d014c2
Restructuring parser
...
Adding tests for everything
2026-01-03 02:34:04 +00:00
f99b44fdb2
Fix double free
...
Was freeing the wrong element before.
2026-01-03 02:33:12 +00:00
5a7d3caf9c
Subject validation
2026-01-02 23:57:40 +00:00
a21dbfe3bb
Remove unnecessary explicit enum
...
This can be computed (as it is now)
2026-01-02 23:31:16 +00:00
0f851a140d
Fix possible race condition
...
Since the queue was being set in an async task and we were then
calling send while asserting that the queue was set, we could have
triggered a panic.
I didn't run into it, but it seemed likely to cause issues in the
future.
Also compute the buffer size for operators at comptime.
2026-01-02 23:13:54 +00:00
67908cf198
Move handshake from client to server
2026-01-02 22:37:54 +00:00
29e5b92ee0
Only check for ctrl+c every 10 ms
2026-01-02 20:50:57 +00:00
90b5b1548f
Add branch hints for high performance messaging.
2026-01-02 20:35:58 +00:00