# Extensions
## Connecting to your server from other computers
In order to allow other computers to connect to your server, we need to bind to our local computer's public address. To find this, we'll bring in the `local_ip_address` crate as a dependency, which does some surprisingly nontrivial work to help us out. Run the following while inside your `server-student1-student2` repo directory:
```console
cargo add local_ip_address
```
Then, we want to use it to find our local public address, print it out so we can share it with others, and finally bind our server to it. To do this, add the following to your `main` function:
Filename: src/main.rs
```rust
fn main() -> io::Result<()> {
    let ip = local_ip_address::local_ip().unwrap();
    println!("IP: {ip:?}");
    let mut server = Server::new(&format!("{ip}:6379"))?;
    // event loop logic here...
}
```
Running your server now with `cargo run` will print out an IP address.

Before you can connect from another computer, make sure you pull the latest changes from the `chat-client` repo with `git pull`, and then run the client with:
```console
cargo run -- <the ip address of the server>
```
For example, if the server printed `IP: 130.58.68.175`, you would run the client with `cargo run -- 130.58.68.175`.
## Feature ideas
### Direct messaging
Add support for direct messaging other users. For example, if I'm `Quinn` and I want to message `William`, I could write:
```text
/dm William u tryna rust rn?
```

And on my screen I should see:

```text
you -> William: u tryna rust rn?
```

And he should see:

```text
Quinn -> you: u tryna rust rn?
```
This would involve some work modifying the message queue so that messages can also carry information about who they should be sent to.
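One way to approach this (a minimal sketch; the `QueuedMessage` and `Recipient` names are hypothetical, not part of the starter code) is to tag each queued message with its intended audience:

```rust
/// Who an outgoing message is for. (A sketch of one possible design.)
enum Recipient {
    /// Broadcast to every connected client (the current behavior).
    Everyone,
    /// Send only to the client with this username.
    One(String),
}

/// An entry in the message queue: the encoded bytes plus who should get them.
struct QueuedMessage {
    recipient: Recipient,
    bytes: Vec<u8>,
}
```

The broadcast path then sends `Everyone` entries to all clients as before, while `/dm` builds a `One(name)` entry for the target (plus, perhaps, a copy formatted for the sender's own screen).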
Note that the client code will concatenate message parts for you. So if the server wanted William to see `Quinn -> you: the message`, it could send:
```rust
Slice(&[
    Simple("MSG"),
    Slice(&[
        // a bunch of frames for each part
        // that will then be concatenated by the client
    ]),
])
```
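For example, assuming the client simply joins the inner frames in order, the direct message above might be encoded as follows (a sketch using the same `Frame` variants as the snippet above; the exact split into parts is up to you):

```rust
Slice(&[
    Simple("MSG"),
    Slice(&[
        Simple("Quinn"),
        Simple(" -> you: "),
        Simple("u tryna rust rn?"),
    ]),
])
```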
### Info commands
Support commands so clients can write things like `/info` and get information sent to them and only them. For example, you could send them a list of the currently connected clients, how long each person has been connected, or who has sent the most messages.
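One way to structure this (a sketch; the `Action` enum is hypothetical, and the real project may organize the decision differently) is to have the message handler decide whether a reply should be broadcast or sent back only to the sender:

```rust
/// What the server should do with one incoming line of text.
enum Action {
    /// Send this text to every connected client.
    Broadcast(String),
    /// Send this text back to the sender only.
    ReplyToSender(String),
}

fn handle_line(text: &str, connected_users: &[String]) -> Action {
    if text.trim() == "/info" {
        // Reply privately, e.g. with the list of currently connected clients.
        Action::ReplyToSender(format!("Connected: {}", connected_users.join(", ")))
    } else {
        Action::Broadcast(text.to_string())
    }
}
```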
### Chat rooms
Building off the idea of special commands, add support for chat rooms in some way. Allow users to join and leave chat rooms, maybe even create their own.
For this, you'll definitely need a way to send messages to specific people instead of always broadcasting to everyone.
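One possible starting point for tracking rooms (a sketch; using `usize` connection IDs is an assumption about how you identify clients):

```rust
use std::collections::{HashMap, HashSet};

/// Maps each room name to the set of connection IDs currently in it.
#[derive(Default)]
struct Rooms {
    members: HashMap<String, HashSet<usize>>,
}

impl Rooms {
    fn join(&mut self, room: &str, conn_id: usize) {
        self.members.entry(room.to_string()).or_default().insert(conn_id);
    }

    fn leave(&mut self, room: &str, conn_id: usize) {
        if let Some(ids) = self.members.get_mut(room) {
            ids.remove(&conn_id);
        }
    }

    /// Everyone a message sent to `room` should go to.
    fn recipients(&self, room: &str) -> Vec<usize> {
        self.members
            .get(room)
            .map(|ids| ids.iter().copied().collect())
            .unwrap_or_default()
    }
}
```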
## Security
### Spam protection
Track how frequently each connection is sending messages and send them warnings or disconnect them if they spam too much.
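For example, you could keep a short history of send times per connection and flag clients that exceed some rate (a sketch; the limit of 5 messages per 10 seconds is arbitrary):

```rust
use std::collections::VecDeque;
use std::time::{Duration, Instant};

/// Per-connection record of recent message times.
#[derive(Default)]
struct RateLimiter {
    recent: VecDeque<Instant>,
}

impl RateLimiter {
    /// Record one message and return true if the client is over the limit
    /// (more than 5 messages in the last 10 seconds).
    fn record_message(&mut self) -> bool {
        let now = Instant::now();
        self.recent.push_back(now);
        // Drop timestamps that have aged out of the 10-second window.
        while let Some(&oldest) = self.recent.front() {
            if now.duration_since(oldest) > Duration::from_secs(10) {
                self.recent.pop_front();
            } else {
                break;
            }
        }
        self.recent.len() > 5
    }
}
```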
### Username checking
Don't allow clients to join with the same name as someone else, or add filters on what names are allowed. Right now, the client will let just about anything through (including emojis, special symbols, etc.).
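A sketch of one possible filter (what counts as an acceptable name is up to you):

```rust
use std::collections::HashSet;

/// Accept a proposed username only if it isn't already taken and only
/// contains ASCII letters, digits, and underscores.
fn username_ok(name: &str, taken: &HashSet<String>) -> bool {
    !name.is_empty()
        && !taken.contains(name)
        && name.chars().all(|c| c.is_ascii_alphanumeric() || c == '_')
}
```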
## Parallelism
Having everything in one event loop is straightforward, but running everything on a single thread can become a significant performance bottleneck.
There are two key ingredients when it comes to concurrency in Rust: threads, which you can create with `std::thread::spawn`, and channels (such as those in `std::sync::mpsc`), which let threads send values to one another.
Sometimes, we want to send multiple different kinds of messages across a channel, and enums are perfectly suited for this use case because each variant can represent a different kind of message. For example, we could have an `Update` enum with variants representing a new connection, a disconnection, or a message being sent, and then have a single channel that sends `Update` values.
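Here is a small sketch of that idea (the variant payloads are assumptions; in your server, a new connection would probably carry a stream or write handle rather than just an ID):

```rust
use std::sync::mpsc;

/// Every kind of event the main loop might need to tell another thread about.
enum Update {
    /// A client connected; carries its connection ID.
    Connected(usize),
    /// A client disconnected.
    Disconnected(usize),
    /// An encoded message that should be written out to clients.
    Message(Vec<u8>),
}

fn main() {
    // One channel can now carry all three kinds of events.
    let (tx, rx) = mpsc::channel::<Update>();
    tx.send(Update::Connected(0)).unwrap();
    tx.send(Update::Message(b"hello\r\n".to_vec())).unwrap();
    drop(tx);

    for update in rx {
        match update {
            Update::Connected(id) => println!("client {id} connected"),
            Update::Disconnected(id) => println!("client {id} disconnected"),
            Update::Message(bytes) => println!("{} bytes to send", bytes.len()),
        }
    }
}
```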
With this out of the way, there are (at least) two levels of extensions we can do with concurrency for this project:
- Writer thread
- Recycling allocations
### Writer thread
This will include channels from the main thread to the writer thread: one for connection changes and one for outgoing messages.
Here, the main loop will be responsible for checking for new connections and checking for new frames. While there are new connections to accept (typically we expect 0 or 1 at any given moment), each one is sent to the writer thread. Then, the loop goes through all existing connections and tries to receive a message from each. If a message is received, it is handled, and the response is encoded into a `Vec<u8>` and sent to the writer thread as well.
The writer thread will first pull all messages from the connect/disconnect channel and update its own list of live connections accordingly. Then, it will pull messages from the message channel, write them to all of the connected clients, and repeat this process from the start.
This strategy should only involve two major changes to your code: the `enqueue_message` function should now send `Vec<u8>`s across a channel instead of pushing them to a `VecDeque`, and you will need to spawn the writer thread (and define its functionality) in `main`.
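Here is a rough sketch of what `main` might set up, under a few assumptions: connections are identified by `usize` IDs, the writer holds a `TcpStream` (or whatever write handle your `Connection` type exposes) per client, and two `std::sync::mpsc` channels carry connection events and encoded messages, as described above:

```rust
use std::io::Write;
use std::net::TcpStream;
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Connection changes sent from the main loop to the writer thread.
enum ConnEvent {
    Connected(usize, TcpStream),
    Disconnected(usize),
}

/// Spawns the writer thread and returns the two sending halves for main to use.
fn spawn_writer() -> (mpsc::Sender<ConnEvent>, mpsc::Sender<Vec<u8>>) {
    let (conn_tx, conn_rx) = mpsc::channel::<ConnEvent>();
    let (msg_tx, msg_rx) = mpsc::channel::<Vec<u8>>();

    thread::spawn(move || {
        let mut clients: Vec<(usize, TcpStream)> = Vec::new();
        loop {
            // 1. Drain the connect/disconnect channel and update the client list.
            while let Ok(event) = conn_rx.try_recv() {
                match event {
                    ConnEvent::Connected(id, stream) => clients.push((id, stream)),
                    ConnEvent::Disconnected(id) => clients.retain(|(cid, _)| *cid != id),
                }
            }
            // 2. Drain the message channel and write each message to every client.
            while let Ok(bytes) = msg_rx.try_recv() {
                for (_, stream) in &mut clients {
                    // A real server should handle or report write errors.
                    let _ = stream.write_all(&bytes);
                }
            }
            // Sleep briefly so this polling loop doesn't spin a whole core.
            thread::sleep(Duration::from_millis(1));
        }
    });

    (conn_tx, msg_tx)
}
```

With something like this in place, `enqueue_message` becomes a `msg_tx.send(bytes)` call, and accepting or dropping a connection sends a `ConnEvent` on `conn_tx`.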
### Recycling allocations
Instead of having an ordinary channel sending `Vec<u8>` values to the writer thread, use the `thingbuf` crate, which offers an MPSC channel that also acts as an object pool. This way, allocations can be reused. This should only involve changing the `enqueue_message` method.
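A minimal sketch of the idea, assuming `thingbuf`'s blocking MPSC channel (`thingbuf::mpsc::blocking::channel` with `send_ref`/`recv_ref`); double-check the crate's documentation for the exact API and recycling policy in the version you install:

```rust
use thingbuf::mpsc::blocking;

fn main() {
    // A bounded channel of Vec<u8> buffers; the capacity of 64 is arbitrary.
    let (tx, rx) = blocking::channel::<Vec<u8>>(64);

    // Sender side (roughly what enqueue_message would do): borrow a slot in
    // the channel and write the encoded reply into it, rather than allocating
    // a fresh Vec for every message.
    let mut slot = tx.send_ref().unwrap();
    slot.clear();
    slot.extend_from_slice(b"+OK\r\n");
    drop(slot); // the message becomes visible to the writer thread here

    // Receiver side (the writer thread): borrow the slot, write the bytes out,
    // and let the buffer return to the pool when the borrow is dropped.
    if let Some(msg) = rx.recv_ref() {
        println!("got {} bytes to send", msg.len());
    }
}
```

How much buffer capacity is retained between uses depends on the recycling policy you configure; the crate's `recycling` module documents the options.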