Coming from learning and using Python and TypeScript to build backend solutions, transform data, and automate processes (including in the cloud), I came across Rust, which Jack Dorsey, the former CEO of Twitter, called "the perfect programming language."
This repo contains tinkering and lab files from my study of Rust via various sources, namely:
- The Rust Book, also known as "The Book" (primary resource)
- Programming Rust, 2nd Edition, by Jim Blandy, Jason Orendorff, and Leonora F. S. Tindall
- The Little Book of Rust Macros
- The Rust Reference
- The Rustonomicon (The Dark Arts of Unsafe Rust)
For a more interactive learning experience, I used the interactive version of the Rust Book, which features quizzes, highlighting with notes, visualizations, and more. You can find it here: https://rust-book.cs.brown.edu/. In my opinion, this interactive version, officially called the Rust Book Experiment (a fork of the Rust Book), is the best study version available, with best-in-class visual code blocks for learning Rust's ownership model, lifetime annotations, and more. It is maintained by Brown University's Cognitive Engineering Lab research team, who build a good number of Rust devtools. Check them out at: https://github.com/cognitive-engineering-lab/
Files from The Rust Book are stored in the top-level (root) directory, while files from Programming Rust are stored in the 00-programming_rust_book directory.
Project files are stored in the projects/ directory, named after the chapters of the Book they come from. Here are some highlight projects I worked on:
I built a multi-threaded web server with a bounded thread pool of worker threads, which limits the extent of a DoS (Denial of Service) attack while efficiently utilizing compute resources.
- Project directory: projects/21/multithreaded_web_server/hello/
- Single-threaded binary: projects/21/multithreaded_web_server/hello/src/main.rs
- Multi-threaded binary: projects/21/multithreaded_web_server/hello/src/bin/hello_mt.rs
To run the single-threaded web server binary, make sure your working directory is the project directory, then run:

```sh
cargo run # builds and runs src/main.rs
```

The single-threaded web server listens for TCP connections on 127.0.0.1:7878. When a connection request comes in from the browser, the server reads the request line of the HTTP message. In this simple case I handle only the home route /, which responds to the request with the HTML in the hello.html file. Any other route (e.g. /something-else) gets a response with the HTML in the 404.html file.
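For reference, here is a minimal sketch of that single-threaded flow, following the approach in Chapter 21 of the Book; the repo's actual main.rs may differ in details:

```rust
use std::{
    fs,
    io::{BufRead, BufReader, Write},
    net::TcpListener,
};

fn main() {
    let listener = TcpListener::bind("127.0.0.1:7878").unwrap();

    for stream in listener.incoming() {
        let mut stream = stream.unwrap();

        // Read just the request line, e.g. "GET / HTTP/1.1".
        let request_line = BufReader::new(&stream).lines().next().unwrap().unwrap();

        // Route "/" to hello.html; everything else gets 404.html.
        let (status_line, filename) = if request_line == "GET / HTTP/1.1" {
            ("HTTP/1.1 200 OK", "hello.html")
        } else {
            ("HTTP/1.1 404 NOT FOUND", "404.html")
        };

        let body = fs::read_to_string(filename).unwrap();
        let response =
            format!("{status_line}\r\nContent-Length: {}\r\n\r\n{body}", body.len());
        stream.write_all(response.as_bytes()).unwrap();
    }
}
```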
The limitation of a single-threaded web server is that if a request comes in from the browser that triggers a long-running process on the server, it blocks every other simultaneous request. Each subsequent request must wait for the long-running process to finish and return, as if processed serially, before it is itself processed and returned. The solution is a multi-threaded web server that handles requests concurrently across a pool of threads.
In the multi-threaded web server, I simulate a long-running process that takes 5 seconds to execute. In the single-threaded binary, subsequent requests wait those 5 seconds before they can execute and return, but in the multi-threaded binary one worker thread handles the long-running process while other worker threads respond to subsequent requests. This creates a snappy experience for the user visiting http://localhost:7878 in the browser: subsequent requests resolve almost immediately, without waiting for the long-running request to finish.
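The sleep simulation boils down to one extra route. Here is a hypothetical routing helper illustrating it; `request_line` would come from reading the TCP stream as in the single-threaded sketch above, and this is not necessarily the repo's exact code:

```rust
use std::{thread, time::Duration};

fn route(request_line: &str) -> (&'static str, &'static str) {
    match request_line {
        "GET / HTTP/1.1" => ("HTTP/1.1 200 OK", "hello.html"),
        "GET /sleep HTTP/1.1" => {
            // Simulate a long-running process.
            thread::sleep(Duration::from_secs(5));
            ("HTTP/1.1 200 OK", "hello.html")
        }
        _ => ("HTTP/1.1 404 NOT FOUND", "404.html"),
    }
}

fn main() {
    println!("{:?}", route("GET / HTTP/1.1")); // returns immediately
    println!("{:?}", route("GET /sleep HTTP/1.1")); // returns after 5s
}
```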
Below is a CLI test that demonstrates the multi-threaded web server responding concurrently.
1. In the terminal, make sure your working directory is the project directory, then run:

```sh
cargo run --bin hello_mt
```

2. In another terminal process, run:

```sh
for i in {1..2}; do
  curl -s -o /dev/null http://localhost:7878/sleep &
  curl -s -o /dev/null http://localhost:7878/sleep &
  for j in {1..5}; do
    curl -s -o /dev/null http://localhost:7878/ &
  done
done
wait
```

At 1, we start the multi-threaded web server. After the server is up and running, at 2 we make 14 simultaneous requests to it from another terminal process. Four of those requests are to a long-running process (route /sleep) that takes 5 seconds each. Despite the requests to the long-running process, the subsequent requests are handled and returned immediately.
You can see in the image below how the server's thread pool implementation hands each incoming request to an available worker thread, thereby handling every request concurrently.
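Here is a condensed sketch of that thread-pool idea, following the design from Chapter 21 of the Book; the repo's hello crate likely adds a Worker struct and graceful shutdown, which are omitted here:

```rust
use std::{
    sync::{mpsc, Arc, Mutex},
    thread,
    time::Duration,
};

// A job is any closure the pool can run once on some worker thread.
type Job = Box<dyn FnOnce() + Send + 'static>;

struct ThreadPool {
    _workers: Vec<thread::JoinHandle<()>>,
    sender: mpsc::Sender<Job>,
}

impl ThreadPool {
    fn new(size: usize) -> ThreadPool {
        let (sender, receiver) = mpsc::channel::<Job>();
        // All workers share one receiving end, guarded by a mutex.
        let receiver = Arc::new(Mutex::new(receiver));
        let workers = (0..size)
            .map(|id| {
                let receiver = Arc::clone(&receiver);
                thread::spawn(move || loop {
                    // Block until a job arrives; exit when the channel closes.
                    let job = match receiver.lock().unwrap().recv() {
                        Ok(job) => job,
                        Err(_) => break,
                    };
                    println!("worker {id} got a job");
                    job();
                })
            })
            .collect();
        ThreadPool { _workers: workers, sender }
    }

    fn execute<F: FnOnce() + Send + 'static>(&self, f: F) {
        self.sender.send(Box::new(f)).unwrap();
    }
}

fn main() {
    let pool = ThreadPool::new(4);
    for i in 0..8 {
        pool.execute(move || println!("handled request {i}"));
    }
    // Crude wait so the demo jobs finish before main exits.
    thread::sleep(Duration::from_millis(200));
}
```

A bounded pool like this is what limits the blast radius of a DoS attack: at most `size` requests are in flight at once, and the rest queue on the channel instead of spawning unbounded threads.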
If you visit the home route / of the web server, this is the HTML content served from the hello.html file:
As the title says, I built a declarative macro (also known as a macro-by-example, or macro_rules! macro) that evaluates any recurrence-defined sequence, including the Fibonacci sequence, in milliseconds.
- Project directory: projects/20/macros/little_book/mbe/fib_recurrence
- Source file: projects/20/macros/little_book/mbe/fib_recurrence/src/main.rs
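To give a flavor of what the macro does, here is a much-simplified stand-in that expands to an iterator over the two-term Fibonacci recurrence. The real recurrence!() macro from the Little Book generalizes to arbitrary initial terms and recurrence expressions; this sketch is illustrative, not the repo's implementation:

```rust
// Simplified stand-in for a recurrence macro: expands to a lazy
// iterator where each step carries the last two terms as state.
macro_rules! fib {
    () => {
        (0u64..).scan((0u64, 1u64), |state, _| {
            let next = state.0;
            *state = (state.1, state.0 + state.1);
            Some(next)
        })
    };
}

fn main() {
    for (i, n) in fib!().take(10).enumerate() {
        println!("fib[{i}] = {n}");
    }
}
```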
Running the source file to produce the Fibonacci sequence via the recurrence!() macro up to the 10th term takes 0.23 seconds:
Running the source file to produce the Fibonacci sequence via the recurrence!() macro up to the 50th term takes 0.22 seconds:




