A small Gradle/Java project that runs repeatable concurrency benchmarks on six implementations of the classic Dining Philosophers problem. Each implementation (“v1” … “v6”) represents a different synchronization strategy, and the runner collects timing metrics and exports results to CSV for analysis.
Each run simulates N philosophers competing for N forks for a fixed duration. The benchmark tracks:
- Meals eaten (throughput)
- Total wait time accumulated across philosophers
- Average wait time per meal (global)
- Average wait time per philosopher per meal (fairness signal)
Results are printed to stdout and appended to a CSV file in results/.
The versions are listed in org.util.Versions and implemented in:
- src/main/java/org/v1
- src/main/java/org/v2
- src/main/java/org/v3
- src/main/java/org/v4
- src/main/java/org/v5
- src/main/java/org/v6
**v1**
- Uses plain Java monitors: nested `synchronized(leftFork)` then `synchronized(rightFork)`.
- Deadlock is possible (classic circular wait).
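The nested-monitor pattern can be sketched as follows (class and method names here are illustrative, not the project's actual v1 code). A single philosopher runs to completion; with N philosophers locking forks around a ring this way, a circular wait — and thus deadlock — becomes possible.

```java
// Sketch of v1-style nested monitor acquisition (illustrative names).
// One philosopher eats once, so this terminates; with N threads each doing
// synchronized(left) then synchronized(right) around a ring of forks,
// all N can hold their left fork and wait forever for the right one.
class NestedMonitorSketch {
    static final Object[] forks = { new Object(), new Object() };

    static void eatOnce(int id, Object left, Object right) {
        synchronized (left) {        // pick up the left fork first
            synchronized (right) {   // then the right fork
                System.out.println("philosopher " + id + " eats");
            }                        // forks released in reverse order
        }
    }

    public static void main(String[] args) {
        eatOnce(0, forks[0], forks[1]);
        System.out.println("done");
    }
}
```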
**v2**
- Each fork has a `Semaphore(1)` and a `free` flag.
- Philosophers loop using `tryAcquire()` on both semaphores, then check both forks are free before “locking”.
- Can reduce deadlock likelihood, but may still show unfairness/starvation depending on scheduling/timing.
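A minimal sketch of the non-blocking `tryAcquire()` approach (illustrative names; the project's version additionally tracks a per-fork `free` flag, omitted here). Because a philosopher releases everything it holds on failure, no one blocks while holding a fork, so deadlock cannot occur — though repeated failed attempts are possible:

```java
import java.util.concurrent.Semaphore;

// Sketch of v2-style tryAcquire fork grabbing (illustrative names).
// Two philosophers each eat a fixed number of meals; on a failed attempt
// they release whatever they hold and retry, so no thread ever blocks
// while holding a fork.
class TryAcquireSketch {
    static final int N = 2, MEALS = 2;
    static final Semaphore[] forks = { new Semaphore(1), new Semaphore(1) };
    static int totalMeals = 0;

    static void philosopher(int id) {
        Semaphore left = forks[id], right = forks[(id + 1) % N];
        int eaten = 0;
        while (eaten < MEALS) {
            if (left.tryAcquire()) {        // non-blocking attempt
                if (right.tryAcquire()) {
                    eaten++;                // "eat" while holding both forks
                    right.release();
                }
                left.release();             // always give back what we hold
            }
            Thread.yield();                 // back off before retrying
        }
        synchronized (TryAcquireSketch.class) { totalMeals += eaten; }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] ts = new Thread[N];
        for (int i = 0; i < N; i++) {
            final int id = i;
            (ts[i] = new Thread(() -> philosopher(id))).start();
        }
        for (Thread t : ts) t.join();
        System.out.println("total meals: " + totalMeals);
    }
}
```

The retry loop is also where the unfairness mentioned above comes from: nothing guarantees a philosopher who keeps failing will ever win both forks promptly.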
**v3**
- Breaks symmetry by changing acquisition order:
  - even philosophers pick forks in one order
  - odd philosophers pick forks in the reverse order
- This removes the circular-wait condition and is a common deadlock-avoidance approach.
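The even/odd ordering can be sketched like this (illustrative names, not the project's actual v3 code). Because at least two neighbors reach for the same fork first, a full cycle of waits around the table is impossible, so this terminates even with blocking locks under contention:

```java
// Sketch of v3-style symmetry breaking (illustrative names): even
// philosophers lock left-then-right, odd ones lock right-then-left.
// Mixed directions make a circular wait impossible, so this terminates.
class AsymmetricOrderSketch {
    static final int N = 5, MEALS = 2;
    static final Object[] forks = new Object[N];
    static int totalMeals = 0;

    static void philosopher(int id) {
        Object left = forks[id], right = forks[(id + 1) % N];
        Object first  = (id % 2 == 0) ? left  : right;  // even: left first
        Object second = (id % 2 == 0) ? right : left;   // odd: right first
        for (int m = 0; m < MEALS; m++) {
            synchronized (first) {
                synchronized (second) {
                    synchronized (AsymmetricOrderSketch.class) { totalMeals++; }
                }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < N; i++) forks[i] = new Object();
        Thread[] ts = new Thread[N];
        for (int i = 0; i < N; i++) {
            final int id = i;
            (ts[i] = new Thread(() -> philosopher(id))).start();
        }
        for (Thread t : ts) t.join();
        System.out.println("total meals: " + totalMeals);  // N * MEALS
    }
}
```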
**v4**
- Each iteration randomly chooses whether to pick up the left or right fork first.
- Randomization reduces the chance of persistent lock-step behavior.
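The per-iteration coin flip can be sketched as below (illustrative names). A single philosopher is used here on purpose: unlike the fixed even/odd split, random ordering with blocking locks does not by itself rule out deadlock under contention, it only makes persistent lock-step patterns unlikely.

```java
import java.util.Random;

// Sketch of v4-style random fork ordering (illustrative names). Each meal
// flips a coin for which fork to lock first; seeded for repeatability.
class RandomOrderSketch {
    public static void main(String[] args) {
        Object left = new Object(), right = new Object();
        Random rnd = new Random(42);  // fixed seed: same choices every run
        for (int i = 0; i < 4; i++) {
            boolean leftFirst = rnd.nextBoolean();
            Object first  = leftFirst ? left  : right;
            Object second = leftFirst ? right : left;
            synchronized (first) {
                synchronized (second) {
                    System.out.println("meal " + i + ": "
                            + (leftFirst ? "left-first" : "right-first"));
                }
            }
        }
        System.out.println("done");
    }
}
```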
**v5**
- A global `Semaphore(n - 1)` limits the number of philosophers allowed to attempt eating concurrently.
- While holding a permit, philosophers still synchronize on left then right fork.
- This is a classic deadlock-prevention technique.
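A sketch of the N-1 table limit (illustrative names, not the project's actual v5 code). With at most N-1 philosophers reaching for forks at once, at least one of them can always pick up both, so a full circular wait never forms and the run below terminates:

```java
import java.util.concurrent.Semaphore;

// Sketch of v5-style global limiting (illustrative names): a semaphore with
// N-1 permits caps how many philosophers may reach for forks at once, which
// guarantees at least one of them can complete both acquisitions.
class TableLimitSketch {
    static final int N = 5, MEALS = 2;
    static final Object[] forks = new Object[N];
    static final Semaphore table = new Semaphore(N - 1);
    static int totalMeals = 0;

    static void philosopher(int id) throws InterruptedException {
        Object left = forks[id], right = forks[(id + 1) % N];
        for (int m = 0; m < MEALS; m++) {
            table.acquire();              // at most N-1 at the table
            synchronized (left) {
                synchronized (right) {
                    synchronized (TableLimitSketch.class) { totalMeals++; }
                }
            }
            table.release();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < N; i++) forks[i] = new Object();
        Thread[] ts = new Thread[N];
        for (int i = 0; i < N; i++) {
            final int id = i;
            ts[i] = new Thread(() -> {
                try { philosopher(id); } catch (InterruptedException ignored) {}
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println("total meals: " + totalMeals);  // N * MEALS
    }
}
```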
**v6**
- Also uses a global `Semaphore(n - 1)`, but attempts `tryAcquire()` first.
- If `tryAcquire()` succeeds: locks left then right.
- If it fails: locks right then left (fallback acquisition order).
- Mixes global limiting with asymmetric ordering behavior.
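The hybrid branch can be sketched like this (illustrative names, not the project's actual v6 code). A failed `tryAcquire()` flips the acquisition order, so permit holders and the philosopher left without a permit reach for forks in opposite directions, which breaks any potential cycle:

```java
import java.util.concurrent.Semaphore;

// Sketch of v6-style hybrid limiting (illustrative names): try for a table
// permit first; with a permit, lock left-then-right; without one, fall back
// to right-then-left. The mixed directions prevent a circular wait.
class HybridSketch {
    static final int N = 5, MEALS = 2;
    static final Object[] forks = new Object[N];
    static final Semaphore table = new Semaphore(N - 1);
    static int totalMeals = 0;

    static void philosopher(int id) {
        Object left = forks[id], right = forks[(id + 1) % N];
        for (int m = 0; m < MEALS; m++) {
            boolean permitted = table.tryAcquire();     // non-blocking
            Object first  = permitted ? left  : right;  // fallback order
            Object second = permitted ? right : left;   // when no permit
            synchronized (first) {
                synchronized (second) {
                    synchronized (HybridSketch.class) { totalMeals++; }
                }
            }
            if (permitted) table.release();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < N; i++) forks[i] = new Object();
        Thread[] ts = new Thread[N];
        for (int i = 0; i < N; i++) {
            final int id = i;
            (ts[i] = new Thread(() -> philosopher(id))).start();
        }
        for (Thread t : ts) t.join();
        System.out.println("total meals: " + totalMeals);  // N * MEALS
    }
}
```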
Benchmark settings are in org.util.Config:
- `N = 100` philosophers
- `TEST_TIME = 10000` ms
- `EAT_TIME = 500` ms
- `THINK_TIME = 200` ms
- `REPETITIONS = 10` runs per execution
- `RAND_EAT_THINK_TIMES = true` (randomizes eat/think sleep up to the configured max)
- Output CSV: `results/N_<N>.csv` (example: `results/N_100.csv`)
- `DEBUG_PRINT = false`
Each repetition appends one line to the CSV via org.util.TestToCsv, roughly:
- version, n, testTime, totalWaitTime, avgWaitTime, eaten, then one average value per philosopher
This makes it easy to compare strategies across runs (and across different values of N).
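A rough sketch of the row shape described above (this is not the actual `org.util.TestToCsv` code, and the sample values are invented for illustration):

```java
import java.util.StringJoiner;

// Illustrative sketch of the CSV row layout: version, n, testTime,
// totalWaitTime, avgWaitTime, eaten, then one average value per philosopher.
// All values below are made-up sample data, not real benchmark output.
class CsvRowSketch {
    public static void main(String[] args) {
        String version = "v3";
        int n = 4;                                   // hypothetical run size
        long testTime = 10000, totalWaitTime = 8400, eaten = 120;
        double avgWaitTime = (double) totalWaitTime / eaten;
        double[] perPhilosopherAvg = { 68.0, 71.5, 70.2, 69.8 };

        StringJoiner row = new StringJoiner(",");
        row.add(version)
           .add(Integer.toString(n))
           .add(Long.toString(testTime))
           .add(Long.toString(totalWaitTime))
           .add(Double.toString(avgWaitTime))
           .add(Long.toString(eaten));
        for (double avg : perPhilosopherAvg) row.add(Double.toString(avg));
        System.out.println(row);
    }
}
```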
- `src/main/java/org/v1..v6/` — benchmark implementations (each contains a `Main`)
- `src/main/java/org/util/` — shared utilities (config, results model, CSV exporter, etc.)
- `results/` — CSV output directory
./gradlew build
./gradlew test

Running: each version has its own `main` method (e.g. `org.v1.Main`, `org.v2.Main`).