# Why did I finally choose Server-Sent Events to implement real-time status?
The author explores technical approaches to implement a real-time status feature on their personal homepage, displaying their current coding or music-listening activities. They evaluate three communication methods: Polling, WebSocket, and Server-Sent Events (SSE). Polling is simple but inefficient and resource-heavy. WebSocket offers low-latency, bi-directional communication but is overkill for a one-way broadcast and faces potential CDN restrictions. Ultimately, the author selects SSE as the ideal solution. SSE provides a lightweight, unidirectional stream from server to client via standard HTTP with built-in automatic reconnection. While acknowledging SSE's limitations—such as text-only data, browser connection limits (largely solved by HTTP/2), basic error handling, and lack of guaranteed delivery—the author concludes it is the most practical and efficient choice for a low-risk, simple status broadcast.
When I first designed my personal homepage, I dug a hole for myself—I had to have some kind of "real-time status" feature to show what I'm doing at the moment, whether I'm writing code, listening to music, or playing games. 😵💫 It sounds like a fancy trick, but without a doubt, it really is a fancy trick.
This persistence comes from the foundation I laid previously when developing the status-reporting software for the 我的动态 ("My Activities") feature in Mix-Space's Shiro. I am currently still maintaining AlienFamilyHub/Kizuna (a Tauri-based activity-reporting program) to implement status reporting. 🫣
## While surfing the internet, I stumbled upon some really fun things
### First
The coding time tracking plugin codetime returns the editor's current editing information to the frontend via API, providing real-time coding status.
A return result like this is very detailed, and can even be accurate down to the file path and Git repository:
```json
{
  "id": number,
  "uid": number,
  "eventTime": number,
  "language": "typescript",
  "project": "Kizuna",
  "relativeFile": "src/stores/eventStore.ts",
  "absoluteFile": "c:\\Users\\tianx\\Desktop\\Kizuna\\src\\stores\\eventStore.ts",
  "editor": "VSCode",
  "platform": "Windows 11",
  "gitOrigin": "https://github.com/AlienFamilyHub/Kizuna.git",
  "gitBranch": "master"
}
```

Through this API, I can get what the user is currently editing, what platform they are writing on, what editor they are using, and the specific project or even file they are working on.
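For my own frontend code, I find it handy to pin that payload down as a TypeScript interface (the field names come from the sample above; the helper function and its wording are my own convenience, not part of codetime's API):

```typescript
// Shape of one codetime event, inferred from the sample payload above.
interface CodeTimeEvent {
  id: number;
  uid: number;
  eventTime: number;
  language: string;
  project: string;
  relativeFile: string;
  absoluteFile: string;
  editor: string;
  platform: string;
  gitOrigin: string;
  gitBranch: string;
}

// Turn an event into a one-line, human-readable status string.
function describeCodingStatus(e: CodeTimeEvent): string {
  return `Editing ${e.relativeFile} (${e.language}) in ${e.editor} on ${e.platform}`;
}
```

For the sample payload above, `describeCodingStatus` returns "Editing src/stores/eventStore.ts (typescript) in VSCode on Windows 11".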
### Second
NetEase Cloud Music's "Listening Status" is one of the elements I wanted to display the most. I figured if it could achieve multi-user synchronization, there must be an API available to get the playback status.
Although it does not have a public API, there are always experts who perform packet capture and reverse engineering to make the relevant content available for developers.
In the project XiaoMengXinX/Music163Api-Go (a Golang implementation of the NetEase Cloud Music API), I found the detailed implementation of this feature, which I could eventually wrap into a form callable from a single file. Thus ProcessReporterWingo/core/NcmNowPlay/main.go at master · TNXG/ProcessReporterWingo was born.
## So I wondered, could these be put into the blogger's status?
The answer is of course yes, but what method should be used to transmit this information?
### Polling (Knock, knock—any new messages?)
At first, I used the most plain method—polling. The frontend requests the API every few seconds, asking the server: "Got any new updates over there?"
This method is the simplest, but its drawbacks are also obvious. (AI keeps telling me it wastes bandwidth. That is indeed the main issue with polling, though ~~who still browses your website on a metered connection these days~~ — still, if something can be optimized, it should be.)
It pollutes the "Network" tab in the console and causes a waste of network resources. Moreover, even when there have been no new messages for a long time, it keeps banging on the door every five seconds, which is touchingly enthusiastic, but it doesn't know how to be considerate of the server or the developer's mouse scroll wheel (just kidding).
Timeliness is also an issue; when your status updates, it might not know right away, because "I just finished polling right before you changed, see you next time."
Some might say, why not just shorten the polling interval? This brings up another problem with polling: if the interval is too short, the server pressure will be greater, and most requests will be invalid—"no new messages".
If the polling interval is too long, real-time performance will be greatly compromised. It's an irreconcilable contradiction.
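For reference, a single polling tick can be sketched like this (a minimal sketch; the injected `fetcher` and the change-detection logic are my own framing for illustration, not the exact code I ran):

```typescript
type Fetcher = (url: string) => Promise<string>;

// One polling tick: fetch the current status and notify only when it
// changed, returning the value to compare against on the next tick.
async function pollOnce(
  url: string,
  lastStatus: string | null,
  fetcher: Fetcher,
  onUpdate: (status: string) => void,
): Promise<string> {
  const status = await fetcher(url);
  if (status !== lastStatus) onUpdate(status); // most ticks hit this "no news" path
  return status;
}
```

In practice you would wrap `pollOnce` in a `setInterval` with `fetch` as the fetcher — and the interval is exactly the knob you can never set right: too short hammers the server, too long kills timeliness.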
### WebSocket (Let's establish a persistent connection!)
Since polling is so troublesome, why don't we just establish a persistent connection? WebSocket was born for this.
WebSocket is a protocol that provides full-duplex communication over a single TCP connection.
It allows the server to actively push data to the client, so that when the server has new messages, it can push them to the client immediately without the client constantly asking. It also allows the client to actively report its current data and operating status to the server.
Everyone knows the benefits of WebSocket: bidirectional communication, low latency, and strong real-time performance. It is the preferred choice for scenarios like IM and online collaborative editing. Compared to polling, it's like using a sledgehammer.
And exactly because it's a sledgehammer, it's a bit overkill for my simple "real-time status" feature, a classic case of "using a sledgehammer to crack a nut".
- Bidirectional communication is unnecessary: My "real-time status" is actually one-way; the backend tells the frontend the status and that's it. The frontend will not tell the backend "I saw it" in return. WebSocket here is like buying a walkie-talkie but only using it to listen to broadcasts.
- Connection management is more complex: Logic like keeping the connection alive, heartbeat packets, and reconnection after disconnects clearly requires more code than SSE, leading to higher maintenance costs.
- Resource consumption is relatively high: The browser has to maintain a persistent connection, and the server has to hold the socket continuously. With many sessions, the pressure is not small.
To be honest, I just want to tell visitors that I am writing TypeScript, not open a chat room. Furthermore, WebSocket has a practical problem—
Many CDN providers do not have friendly support for it: Due to its full-duplex communication feature, the WebSocket protocol can theoretically be used to build proxies and other "special purposes". This causes some smaller cloud vendors to directly disable support for it.
In contrast, SSE is based on pure HTTP protocol, and the data flow is clear (only server to client), which greatly reduces the possibility of abuse. Enterprise firewalls and security devices typically perform deep packet inspection on WebSocket traffic, whereas SSE, as a standard HTTP stream, is usually treated as normal web traffic. This is also why in some special network environments, WebSocket connections are prone to interruption, while SSE can work stably.
### Server-Sent Events (Hey, I've got new messages for you!)
Finally, it's time for our protagonist SSE to take the stage! It acts like a one-way broadcasting station; the server can push messages to the client at any time, and the client just needs to listen obediently.
The way SSE works is extremely simple:
- The client initiates a normal HTTP request
- The server keeps the connection open and sets `Content-Type: text/event-stream`
- When there are new messages, the server writes data into this connection
```javascript
// The frontend code is outrageously simple
const eventSource = new EventSource('/api/status');
eventSource.onmessage = (event) => {
  console.log('Got a new update!', event.data);
};
```

It's like installing a radio in the browser; once tuned to the channel, it keeps listening to the server's "broadcast". And this "radio" is particularly thoughtful: if the signal drops (e.g., network jitter), it will automatically reconnect without me having to worry at all. However, to prevent the connection from being closed by network devices as idle (although it would automatically reconnect even if closed), I have the server periodically send comment messages starting with a colon to keep the connection alive. 🎵
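On the wire, each SSE message is just lines of text terminated by a blank line, and any line starting with a colon is a comment the browser ignores — which is exactly what the keep-alive trick relies on. A minimal formatter sketch (the framing follows the event-stream format; the `keep-alive` comment text is an arbitrary choice of mine):

```typescript
// Frame a payload as an SSE "data:" message. Multi-line payloads become
// multiple "data:" lines, and every message ends with a blank line.
function sseMessage(data: string, event?: string): string {
  const lines = data.split("\n").map((l) => `data: ${l}`).join("\n");
  return (event ? `event: ${event}\n` : "") + lines + "\n\n";
}

// A comment line (starts with ":"). Browsers ignore it, but it keeps
// intermediaries from treating the connection as idle.
function sseKeepAlive(): string {
  return ": keep-alive\n\n";
}
```

A server would simply `res.write(sseMessage(status))` whenever something changes, and `setInterval(() => res.write(sseKeepAlive()), 15000)` for the heartbeat.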
Compared to WebSocket's "using a cannon to shoot a mosquito", SSE is practically tailor-made for my one-way push scenario. It doesn't require special protocol support, doesn't need to handle complex connection states, and even disconnection reconnection is automatically handled by the browser for me.
And the best part is, it's so worry-free:
- No need to handle disconnection and reconnection (built into the browser)
- No need to worry about firewall issues (standard HTTP protocol)
- Much less code than WebSocket (a gospel for lazy people)
One of the few "drawbacks" might be that it's one-way—but this is exactly what I want! It's not like I want to chat with visitors, I just want to tell them: "Oi, I just wrote a bug in VS Code! 🐛"
SSE doesn't steal the show or drag on with nonsense; it just quietly waits for the backend to "have something to say" and then delivers it to the frontend. For my project that only wants to display the "current status", it is just right, exactly enough, and fit for purpose.
So in the end, I chose SSE to implement the real-time status feature. It's like a good friend who won't chatter endlessly; when you want to know what I'm doing, it will tell you right away, but it will never ask "did you see it?"—because it simply can't ask. 😆
If you are also interested in SSE and want to dive deeper or practice this technology, I recommend reading the following materials:
These tutorials provide more detailed technical implementation details and best practices, which I believe can help you better understand and apply SSE.
## Of course, there are also some things to be aware of!
### Characteristics of One-Way Data Flow
This can be considered a disadvantage of SSE, but also an advantage.
The one-way nature of its data guarantees its main characteristic of being lightweight, but it also results in its inability to handle complex situations.
### Data Type Limitations
SSE only supports plain text data and cannot transmit binary data.
For scenarios involving multimedia content such as image, audio, and video streams, SSE is not quite suitable.
You cannot directly transmit real-time video or audio streams via SSE—however, this limitation can actually be bypassed using some clever methods, such as converting image data into text format via encoding (like Base64 encoding) and then sending it via SSE.
However, although this approach can transmit images, it is definitely not an efficient method.
Sometimes, strung together, the images might even look like a video, just transmitted much more slowly. 🎥 (Japanese programmer.jpg) (laughs)
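For the curious, that workaround looks roughly like this (a toy sketch using Node's `Buffer`; in a browser you would use `btoa` instead — and the overhead helper shows why it's inefficient: Base64 emits 4 characters for every 3 bytes):

```typescript
// Encode binary data as Base64 so it can ride inside a text-only
// SSE "data:" field.
function binaryToSseData(bytes: Uint8Array): string {
  const b64 = Buffer.from(bytes).toString("base64");
  return `data: ${b64}\n\n`;
}

// Base64 produces 4 output characters per 3 input bytes: ~33% larger.
function base64Overhead(byteLength: number): number {
  return Math.ceil(byteLength / 3) * 4 - byteLength;
}
```

So a 300 KB image costs roughly 400 KB on the wire — workable for the occasional album cover, hopeless for video.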
### Browser Concurrency Limits
When not used over HTTP/2, SSE is subject to maximum connection limits, which is particularly troublesome when opening multiple tabs, because the limit is per browser and is set to a very low number: 6.
This issue is marked as "Won't Fix" in Chrome and Firefox.
However, this limit is per browser + domain, so this means you can open 6 SSE connections to www.example1.com across all tabs, and 6 SSE connections to www.example2.com. (From Stackoverflow).
Fortunately, modern browsers generally support the HTTP/2 protocol. In HTTP/2, the limitation on the number of connections is greatly improved because HTTP/2 supports multiplexing.
In HTTP/1.x, a TCP connection can only handle one request at a time; even with keep-alive, requests on the same connection are processed one after another, so the browser has to open extra connections to work in parallel, leading to resource waste and slower page loading. It's like going to the supermarket and having to line up to enter one by one every time; the efficiency is super low.
And HTTP/2 solves this problem by using multiplexing, allowing multiple requests and responses to be processed simultaneously on the same TCP connection. This is like entering the supermarket and going straight to multiple shelves to grab things at any time without queueing. 🍏🥖
In short, HTTP/2 allows you to handle more SSE streams with fewer TCP connections, avoiding the troubles caused by connection limits. 😎
But it should be noted that this does not mean HTTP/2 is a panacea; if your application still requires high-frequency or low-latency bidirectional communication (such as online chatting, multiplayer games, etc.), then you might still need to consider other protocols like WebSocket to meet more complex requirements.
### Limited Error Handling and Reconnection Mechanisms
SSE comes with an automatic reconnection mechanism, which does help you handle some network disconnection issues, but its error handling and connection recovery mechanisms are relatively basic.
If you encounter an unstable network environment, SSE might perform slightly unstably, requiring you to manually handle some advanced error scenarios, such as data loss, retry logic, the timing of disconnection reconnection, and so on.
In other words, when the network is abnormal, SSE will be very "lazy", just simply reconnecting, and won't be very "smart" to judge under what circumstances it should restore the connection immediately.
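If the built-in behavior is too "lazy" for you, a common pattern is to close the EventSource yourself on error and reopen it with exponential backoff. The delay schedule below is my own arbitrary choice (1 s doubling up to a 30 s cap), not anything the spec mandates:

```typescript
// Exponential backoff with a cap: 1s, 2s, 4s, 8s, ... capped at 30s.
// `attempt` is the number of consecutive failures so far (0-based).
function reconnectDelayMs(attempt: number, baseMs = 1000, capMs = 30_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}
```

The wiring is straightforward: in `onerror`, call `es.close()` and `setTimeout(reopen, reconnectDelayMs(attempt++))`; in `onmessage`, reset `attempt` to 0 so a healthy connection starts the schedule over.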
### Performance and Server Pressure
SSE constantly pushes data to the client via the HTTP protocol, which means that after each successful client connection, the server must keep the connection open to push data continuously.
When there are many client connections or a large amount of data, the pressure on the server will be quite significant, and performance may degrade. Especially in scenarios requiring frequent data updates, SSE might not be the best choice.
In addition, SSE's text-based framing adds some transmission overhead; for applications with extremely high real-time requirements, WebSocket or other protocols might need to be considered to improve efficiency.
### Message Reliability Issues
The one-way data flow of SSE means the stream a client receives is not guaranteed to be complete: messages pushed while a client is disconnected are simply missed. The client will not actively request a retransmission; unless the server tags each message with an `id:` line and replays missed ones based on the `Last-Event-ID` header after reconnection, those updates are skipped. This means your status information may lose a portion under some extreme conditions.
This is not very suitable for certain highly important data transmission scenarios (like systems involving money or state synchronization). For a "low-risk" application like a personal homepage, losing some minor status updates is fine, but for high-frequency trading or real-time collaboration systems, reliability becomes especially important.
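The one mitigation the SSE spec does give you: attach an `id:` to each message, and on reconnect the browser automatically sends a `Last-Event-ID` header, letting the server replay what was missed. A sketch of the server-side replay logic (the ring-buffer capacity and event shape are my own choices for illustration):

```typescript
interface BufferedEvent {
  id: number;
  data: string;
}

// Keep a bounded buffer of recent events so reconnecting clients can
// catch up via the Last-Event-ID header.
class ReplayBuffer {
  private events: BufferedEvent[] = [];

  constructor(private capacity = 100) {}

  push(e: BufferedEvent): void {
    this.events.push(e);
    // Drop the oldest event once we exceed capacity.
    if (this.events.length > this.capacity) this.events.shift();
  }

  // Events the client missed: everything newer than the Last-Event-ID
  // it reported, or the whole buffer if it reported none.
  since(lastEventId: number | null): BufferedEvent[] {
    if (lastEventId === null) return [...this.events];
    return this.events.filter((e) => e.id > lastEventId);
  }
}
```

On each new connection the handler would read `req.headers["last-event-id"]`, write back `buffer.since(...)`, then continue streaming live events. Events older than the buffer capacity are still lost, of course — which is fine for a status widget, not for a payment ledger.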
## Conclusion
If you find any content errors or unclear descriptions during reading, please point them out promptly! 
My original intention for writing this article was to clearly explain the technical details of data transmission regarding the "blogger status" on my personal homepage, and everyone is welcome to communicate and discuss. You can also post the problems and insights you encountered in actual development in the comments section! 🤓
Also, I didn't use the previous naming convention for this article's slug, which can be considered a new beginning! That's all.