Friday, April 17, 2026

Active Object Design Pattern - and a simple implementation in Julia - from the Active Object paradigm of Symbian S60 to today's journey in Julia - a checkered career...

The Active Object Design Pattern is a concurrency pattern that decouples method invocation from method execution, allowing tasks to run asynchronously without blocking the caller.

At its core, an Active Object introduces a proxy that clients interact with. Instead of executing methods directly, the proxy places requests into a queue. A separate worker thread (or pool) processes these requests in the background. This creates a clean separation between what needs to be done and when/how it gets executed.

A typical Active Object system has four key components:

  • Proxy – exposes the interface to the client

  • Method Request – encapsulates a function call as an object or callable

  • Activation Queue – holds pending requests

  • Scheduler/Worker – executes requests asynchronously

This pattern is especially useful when:

  • You want to avoid blocking the main thread

  • You need controlled concurrency (e.g., limited worker threads)

  • You want to serialize access to shared resources safely

Here's a simple implementation of the Active Object Design Pattern in Julia.


function ThreadedActiveObject(nworkers=4)
    ch = Channel{Function}(32)
    tasks = []

    for _ in 1:nworkers
        push!(tasks, Threads.@spawn begin
            for job in ch
                Base.invokelatest(job)
            end
        end)
    end

    return ch, tasks
end

function heavy_compute(n)
    s = 0.0
    for i in 1:n
        s += sin(i) * cos(i)
    end
    println("Computed sum for $n = $s on thread $(Threads.threadid())")
end

ao, tasks = ThreadedActiveObject(4)

for i in 1:10
    put!(ao, () -> heavy_compute(10^7 + i))
end

close(ao)
foreach(wait, tasks)


Sequence Diagram:



Mapping The Code to Active Object Components

Let’s reinterpret the code piece by piece.

Proxy (Client Interface)

put!(ao, () -> heavy_compute(10^7 + i))

This is the proxy layer.

Why?

  • The caller is not executing the method directly
  • Instead, it:
    • wraps the request as a function (closure)
    • submits it to a queue

In classic Active Object:

proxy.method_call() → enqueue request

In my code:

put!(ao, job_function)

So:

The Channel (ao) acts as the proxy interface

Activation Queue

ch = Channel{Function}(32)

This is the Activation Queue.

  • Holds pending method requests
  • Thread-safe
  • Decouples producer and consumer

Classic role:

Queue<Request>

My version:

Channel{Function}

Each Function = a method request object

Method Request

() -> heavy_compute(10^7 + i)

This is a Method Request object, just expressed as a closure.

In traditional OO:

class PrintTask : MethodRequest {
    void execute() { ... }
}

In Julia:

() -> heavy_compute(10^7 + i)

Key idea:

  • Encapsulates:
    • what to do
    • data (i)
    • logic
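For contrast with the closure form, a method request can also be written as an explicit callable struct, much closer to the classic OO MethodRequest. A sketch (the `ComputeRequest` name is invented for illustration; to enqueue such objects the queue would need to be `Channel{Any}` or typed to the struct, rather than `Channel{Function}`):

```julia
# A method request as an explicit callable struct - the classic OO shape.
# `ComputeRequest` is a hypothetical name, not part of the code above.
struct ComputeRequest
    n::Int          # captured data, like the closure capturing `i`
end

# Calling the instance plays the role of MethodRequest.execute()
(req::ComputeRequest)() = sum(sin(i) * cos(i) for i in 1:req.n)

req = ComputeRequest(1000)
println(req())      # roughly the same work the heavy_compute closure does
```

In Julia the closure is usually preferred: it is shorter and the compiler generates the same kind of specialized callable object under the hood.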

Scheduler + Servant (Worker Threads)

Threads.@spawn begin
    for job in ch
        Base.invokelatest(job)
    end
end

This block plays two roles:

Scheduler

for job in ch
  • Pulls requests from the queue
  • Decides execution order (FIFO here)

This is the scheduler

Servant

Base.invokelatest(job)
  • Actually executes the request

This is the servant

So each worker thread is:

[ Scheduler + Servant ]

Thread Pool (Multiple Active Objects Workers)

for _ in 1:nworkers
    Threads.@spawn ...
end
  • Creates multiple workers
  • All consume from the same queue

This is a multi-threaded Active Object

Classic pattern often has:

  • 1 thread → 1 active object

My version:

N threads → shared activation queue

This is more like:

  • Active Object + Thread Pool hybrid

Lifecycle Control

Closing the queue

close(ao)
  • Signals: no more requests
  • Workers stop after finishing remaining jobs

Waiting for completion

foreach(wait, tasks)
  • Ensures all scheduled work completes
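One thing the implementation above does not cover is returning results to the caller: the jobs only print. A common Active Object extension is to have the proxy hand back a future. In Julia a one-slot Channel can play that role. A minimal sketch, under my own naming - `make_active_object` mirrors the constructor above, and `submit` is an invented helper, not a Julia API:

```julia
# Minimal active object, same shape as ThreadedActiveObject above.
function make_active_object(nworkers=2)
    ch = Channel{Function}(32)
    tasks = Base.Task[]
    for _ in 1:nworkers
        push!(tasks, Threads.@spawn begin
            for job in ch
                Base.invokelatest(job)
            end
        end)
    end
    return ch, tasks
end

# Proxy call that returns a "future": a one-slot Channel a worker fills.
# `submit` is a name invented for this sketch.
function submit(ao::Channel{Function}, f)
    fut = Channel{Any}(1)
    put!(ao, () -> put!(fut, f()))   # enqueue the request; a worker fills the future
    return fut
end

ao, tasks = make_active_object()
fut = submit(ao, () -> 21 * 2)
result = take!(fut)                  # blocks until a worker has executed the request
println(result)

close(ao)
foreach(wait, tasks)
```

This keeps the caller decoupled from execution while still letting it rendezvous with the result when it chooses to.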

And here's my journey through Symbian S60's Active Object paradigm - studied many years ago...


A story on my checkered software journey through the wilderness of concurrent programming - one important lesson - basic idea remains the same across platforms...



There’s a certain kind of journey that doesn’t show up on résumés—the kind that winds through late nights, cryptic bugs, half-working abstractions, and those rare moments when something finally clicks. My journey through concurrent programming has been exactly that: a long walk through a wilderness where the landscape keeps changing, but the underlying terrain remains strangely familiar.

I started in the era of VC++. Back then, concurrency felt mechanical—almost industrial. I didn’t “design” concurrent systems; I wrestled them into submission. WaitForSingleObject, WaitForMultipleObjects—these weren’t just API calls to me; they were survival tools. I learned quickly that a missed signal or a mishandled handle could freeze everything into a silent deadlock. Threads were powerful, yes—but also unforgiving. I treated them with respect because I had no choice.

Then I encountered Symbian, and with it, the idea of Active Objects. That was a turning point for me. Instead of manually juggling threads, I began thinking in terms of events, schedulers, and requests. The system was still concurrent, but the chaos felt… organized. I wasn’t blocking threads anymore; I was orchestrating asynchronous flows. It felt like moving from brute force to something more deliberate.

It wasn’t effortless, though. The Active Object model demanded discipline. I had to understand request lifecycles, callbacks, and the implicit contract with the scheduler. But once it clicked, it changed how I saw concurrency—not just as parallel execution, but as controlled responsiveness.

Then came Android and Java. By that time, concurrency had evolved into something more structured. Executors, thread pools, futures, synchronized blocks—the tools were richer, the abstractions deeper. I could finally express intent more clearly: run this task, schedule that work, wait for these results. The raw edges of thread management were softened.

But I also realized something important: the old problems never really went away. Deadlocks still happened. Race conditions still crept in. Performance was still a careful balancing act. The tools had improved, but the responsibility was still mine.

And then I found Julia.

Julia felt different from the start. Lightweight tasks, channels, @async, Threads.@spawn—it was concurrency with a kind of fluidity I hadn’t experienced before. I could write code that looked almost sequential, yet behaved concurrently. Channels made communication feel natural again.

I remember watching tasks communicate over a channel, work flowing across threads, and thinking—this feels like everything I’ve learned, coming together. The low-level discipline from VC++, the event-driven thinking from Symbian, the structured abstractions from Java—they were all there, just distilled into something simpler and more expressive.

That’s when the most important lesson became clear to me.

Across all these platforms, languages, and paradigms—the surface keeps changing. APIs evolve. Syntax improves. Abstractions come and go.

But the core idea doesn’t change.

At its heart, concurrent programming has always been about a few simple truths:

  • Work can happen independently.

  • Coordination is harder than execution.

  • Communication is everything.

  • And timing… timing is where things either work beautifully or fall apart.

Whether I’m waiting on a kernel object, scheduling an active request, submitting tasks to an executor, or passing messages through a channel—I’m solving the same fundamental problem. I’m managing time, state, and interaction.

The wilderness hasn’t disappeared for me. I’ve just learned how to navigate it.

And maybe that’s what this journey really is—not moving from one technology to another, but moving from confusion to clarity. From fighting concurrency… to understanding it.

Thursday, April 16, 2026

Inter thread communication in Julia - Channel and Wait-Notify...

There are a few ways we can accomplish inter-thread communication in Julia. In this article, we will look into two ways - via channel and via wait & notify.

Via Channel...

Here's an implementation of the Producer-Consumer problem in Julia using two threads. The producer creates values and puts them in a channel, while the consumer takes them from the channel. The producer and consumer run on two different threads.

using Base.Threads

ch = Channel{Int}(10)

@sync begin

    # Producer (runs on a thread)
    Threads.@spawn begin
        for i in 1:5
            println("Producing $i on thread $(threadid())")
            put!(ch, i)
            sleep(0.5)
        end
        close(ch)
    end

    # Consumer (runs on another thread)
    Threads.@spawn begin
        for val in ch
            println("Consuming $val on thread $(threadid())")
        end
    end

end

Let's try to dissect the above code.

The Key Idea: Channels are Iterable Streams

In Julia, a Channel is not just a queue—it implements the iteration protocol.

So when we write:

for val in ch
    println("Consuming $val on thread $(threadid())")
end

this is conceptually equivalent to:

while true
    val = take!(ch)   # blocks if empty
    println("Consuming $val on thread $(threadid())")
end

…but with one crucial addition:

The loop automatically stops when the channel is closed.

What Actually Happens Internally

When Julia executes:

for val in ch

it translates roughly into:

state = iterate(ch)

while state !== nothing
    (val, next_state) = state
    println(...)
    state = iterate(ch, next_state)
end

For Channel, iterate(ch) is defined such that:

  • It internally calls take!(ch)

  • If data is available → returns (value, state)

  • If the channel is:

    • empty but open → it blocks (yields the task)

    • closed and empty → returns nothing → loop ends
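This drain-then-stop behavior is easy to verify in isolation. A tiny sketch: the channel is closed before the loop even starts, yet the buffered items still come through, and then iteration ends cleanly.

```julia
ch = Channel{Int}(4)
put!(ch, 1); put!(ch, 2); put!(ch, 3)
close(ch)                   # no more data will ever arrive

collected = Int[]
for val in ch               # drains the remaining buffered items, then exits
    push!(collected, val)
end
println(collected)          # [1, 2, 3]
```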

Why This is Perfect for Concurrency

Let’s break down the behavior in our example:

Producer Thread

put!(ch, i)
  • Pushes data into the channel

  • If the channel buffer is full → producer blocks

Consumer Thread

for val in ch
  • If data is available → consumes immediately

  • If empty → consumer blocks (non-busy wait!)

  • If channel is closed → loop exits cleanly

Important: This is NOT Busy Waiting

A common mistake in other languages:

while(queue.empty()) { /* spin */ }

But Julia does this instead:

  • The consumer yields control

  • The scheduler runs another task

  • When put! happens → consumer is resumed

This is cooperative scheduling, not CPU spinning.

Why for val in ch is Better Than take! Loop

We could write:

while true
    val = take!(ch)
    println(val)
end

But then we must manually handle termination:

  • How do we know when to stop?

  • We'd need a sentinel value or extra signaling

With:

for val in ch

we get:

  • Automatic blocking

  • Automatic wake-up

  • Automatic termination on close(ch)

  • Cleaner, declarative code

The Role of close(ch)

This line in our producer is critical:

close(ch)

Without it:

  • The consumer will wait forever

  • Because it assumes more data might come

With it:

  • The iteration ends naturally

  • for loop exits → task completes

Subtle but Important Detail

Even though we used:

Threads.@spawn

The channel itself is thread-safe, meaning:

  • Multiple producers/consumers can safely operate

  • Synchronization is handled internally
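That thread-safety claim can be sketched with two producers and three consumers sharing one channel. The only subtlety I've added is the shutdown logic: with multiple producers, the channel must be closed only after all of them finish.

```julia
using Base.Threads

ch = Channel{Int}(8)
consumed = Channel{Int}(100)          # thread-safe sink for consumed values

producers = Base.Task[]
for p in 1:2
    push!(producers, Threads.@spawn begin
        for i in 1:5
            put!(ch, 100p + i)        # producer p emits tagged values
        end
    end)
end

consumers = Base.Task[]
for _ in 1:3
    push!(consumers, Threads.@spawn begin
        for val in ch                 # all three consumers drain the same channel
            put!(consumed, val)
        end
    end)
end

foreach(wait, producers)              # wait until every producer is done...
close(ch)                             # ...then signal "no more data"
foreach(wait, consumers)
close(consumed)

vals = sort(collect(consumed))
println(vals)                         # all ten values, each consumed exactly once
```

No value is lost or duplicated: put! and take! (and the iteration protocol on top of them) serialize access internally.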

Final Insight

Consumer code:

for val in ch

is not just syntactic sugar—it encodes three things at once:

  1. Blocking synchronization (wait for data)

  2. Data flow semantics (consume stream)

  3. Termination protocol (stop on close)

That’s why it’s considered idiomatic Julia concurrency.

Via Event/Condition

This is pretty good for implementing signalling between threads.

Here's the code for such a system.

using Base.Threads

mtx = ReentrantLock()
cond = Base.GenericCondition(mtx)   # bind lock + condition

@sync begin
    Threads.@spawn begin
        for i in 1:5
            println(i)
            sleep(1)
        end

        println("Now Waiting...")

        lock(mtx) do
            wait(cond)   # must hold the correct lock
        end

        println("Resuming!")

        for j in 6:10
            println(j)
            sleep(1)
        end
    end

    sleep(10)

    lock(mtx) do
        notify(cond)   # SAME lock
    end
end

Core Idea

A Condition in Julia is a wait queue tied to a lock.

Threads can:

  • wait(cond) → sleep until signaled
  • notify(cond) → wake waiting thread(s)

What happens in the code?

Worker Thread (Waiter)

  1. Prints numbers 1 → 5
  2. Acquires lock and calls:

    wait(cond)
  3. Internally:
    • Releases mtx
    • Goes to sleep
    • Gets queued on cond

The thread is now blocked without consuming CPU

Main Thread (Producer / Notifier)

sleep(10)

lock(mtx) do
notify(cond)
end

What happens:

  1. After 10 seconds, main thread:
    • Acquires the same lock (mtx)
    • Calls notify(cond)
  2. This:
    • Wakes the waiting thread
    • That thread re-acquires the lock
    • Continues execution

Execution Timeline

Worker Thread        Main Thread
--------------       -------------
1 → 5 printed
Now Waiting...
(wait → sleep)
                     sleep(10)
                     notify(cond)
Resuming!
6 → 10 printed

Critical Rules

1. Lock must be held

Both must be inside:

lock(mtx) do ... end

Otherwise:

ConcurrencyViolationError("lock must be held")

2. Same lock everywhere

cond = GenericCondition(mtx)

👉 You must use this exact mtx for:

  • wait
  • notify

3. wait is atomic

When calling:

wait(cond)

Julia:

  1. Releases lock
  2. Sleeps
  3. On notify → wakes up
  4. Re-acquires lock

This avoids race conditions
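One caveat the example above sidesteps: if notify fires before the waiter has reached wait(cond), the signal is lost and the waiter sleeps forever (the sleep(10) is what prevents that here). The usual defense is to guard the wait with a shared flag checked in a loop. A sketch - the `ready` flag is my addition, not part of the code above:

```julia
using Base.Threads

mtx = ReentrantLock()
cond = Base.GenericCondition(mtx)
ready = Ref(false)                # predicate guarding the wait

waiter = Threads.@spawn begin
    lock(mtx) do
        while !ready[]            # the loop handles "notify before wait"
            wait(cond)
        end
    end
    println("Resuming!")
end

sleep(0.1)
lock(mtx) do
    ready[] = true                # set the predicate *before* notifying
    notify(cond)
end
wait(waiter)
```

Because the flag is read and written only while holding mtx, the waiter either sees `ready[] == true` and skips the wait entirely, or is already queued on cond when notify arrives. Either way, no wakeup is missed.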

Conceptual Model

Think of it like:

Condition = Lock + Queue of waiting threads
  • wait → join queue
  • notify → wake one (or all)

When to use this?

Use Condition variables when:

  • You need pure signaling
  • No data needs to be transferred

Use Channel when:

  • You need data + synchronization

Wednesday, April 15, 2026

@async in Julia is not parallelism - but Threads.@spawn is...

The Core Idea

In Julia, @async gives you concurrency, not parallelism.

  • @async → runs multiple tasks, but on the same thread
  • Threads.@spawn → runs tasks on multiple CPU threads (true parallelism)

Demonstration 1

function task(name)
    println("$name running on thread ", Threads.threadid())
    sleep(1)
    println("$name finished on thread ", Threads.threadid())
end

@sync begin
    @async task("Task A")
    @async task("Task B")
end

If we run the above code, we will get:

Task A running on thread 1
Task B running on thread 1
Task A finished on thread 1
Task B finished on thread 1

What is happening internally?

Julia uses cooperative scheduling for @async:

  • Tasks yield control (e.g., sleep, I/O)
  • Scheduler switches between them
  • But execution remains on one OS thread

As you can see from the output, both tasks are running on the same thread.

Demonstration 2

function task(name)
    println("$name running on thread ", Threads.threadid())
    sleep(1)
    println("$name finished on thread ", Threads.threadid())
end

@sync begin
    Threads.@spawn task("Task A")
    Threads.@spawn task("Task B")
end

If we run the above code, we will get:

Task B running on thread 3
Task A running on thread 4
Task B finished on thread 3
Task A finished on thread 4

As you can see from the output, two threads are running in parallel.

Summary


Feature         @async              Threads.@spawn
Threads used    Single thread       Multiple threads
Parallelism     No                  Yes
Concurrency     Yes                 Yes
Best for        I/O, networking     CPU-heavy tasks
Scheduling      Cooperative         Preemptive (OS threads)
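Even without parallelism, @async pays off whenever tasks spend their time waiting: overlapped waits cost roughly the longest wait, not the sum. A small timing sketch to make that concrete:

```julia
# Two 0.5 s "I/O waits" run concurrently on a single thread,
# so the total elapsed time is roughly 0.5 s rather than 1.0 s.
elapsed = @elapsed @sync begin
    @async sleep(0.5)
    @async sleep(0.5)
end
println(elapsed)   # ≈ 0.5, not 1.0
```

Replace the sleeps with CPU-bound loops and the benefit disappears - that is exactly when Threads.@spawn is the right tool.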

Sunday, April 12, 2026

IPC - Incremental Potential Contact - implemented using Julia...

Scientists have started noticing Julia.

The word spreads.

 Not through a marketing campaign.

But seeing the Python-like ease of code writing, along with the speed of C++.

- Physicists liked that Julia felt like writing equations.
- Engineers liked that it handled performance without boilerplate.
- Researchers loved that they could:

  • Prototype quickly
  • Scale to HPC when needed
  • Avoid rewriting everything in another language

The two-language problem vanished.

A prototype can be scaled to a production stage without much effort.

I am loving it.

I studied Incremental Potential Contact (IPC) some time ago and implemented it in Python. Today I used Julia to explore IPC.

Here's the visualization.

And here's the source code...

using LinearAlgebra
using Plots
using Printf

# -------------------------------
# IPC-like Force (Barrier-inspired)
# -------------------------------
function ipc_force(p0::Vector{Float64}, p1::Vector{Float64},
                   kappa::Float64, d_hat::Float64)

    diff = p1 - p0
    d = norm(diff)

    eps = 1e-6
    d_safe = max(d, eps)

    if d >= d_hat
        return zeros(3), zeros(3)
    end

    # Barrier-like force
    f_mag = kappa * (1 / d_safe - 1 / d_hat)

    n = diff / d_safe

    f0 = f_mag * n
    f1 = -f0

    return f0, f1
end


# -------------------------------
# Simulation Core
# -------------------------------
function simulate(; dt=0.002, steps=500,
                  kappa=1.0, d_hat=0.02, mass=1.0)

    p0 = [-0.02, 0.0, 0.0]
    p1 = [ 0.02, 0.0, 0.0]

    # 🔥 stronger motion
    v0 = [0.5, 0.0, 0.0]
    v1 = [-0.5, 0.0, 0.0]

    traj0 = Vector{Vector{Float64}}()
    traj1 = Vector{Vector{Float64}}()
    distances = Float64[]

    push!(traj0, copy(p0))
    push!(traj1, copy(p1))
    push!(distances, norm(p1 - p0))

    for step in 1:steps
        f0, f1 = ipc_force(p0, p1, kappa, d_hat)

        v0 += dt * f0 / mass
        v1 += dt * f1 / mass

        # ✅ damping keeps the oscillation bounded
        damping = 0.99
        v0 *= damping
        v1 *= damping

        p0 += dt * v0
        p1 += dt * v1

        d = norm(p1 - p0)

        @printf("Step %d: distance = %.6f\n", step, d)

        push!(traj0, copy(p0))
        push!(traj1, copy(p1))
        push!(distances, d)
    end

    return traj0, traj1, distances
end

# -------------------------------
# Visualization (Animation + Graph)
# -------------------------------
function visualize(traj0, traj1, distances; dt=0.002,   # matches simulate's dt
                   filename="ipc_simulation.gif")

    time = collect(0:dt:dt*(length(distances)-1))

    anim = @animate for i in 1:length(traj0)

        pos0 = traj0[i]
        pos1 = traj1[i]

        # ---- LEFT: particle motion ----
        p1_plot = scatter(
            [pos0[1], pos1[1]],
            [pos0[2], pos1[2]],
            xlim=(-0.03, 0.03),
            ylim=(-0.02, 0.02),
            markersize=8,
            label="Particles",
            title="IPC Collision Avoidance"
        )

        plot!(
            [pos0[1], pos1[1]],
            [pos0[2], pos1[2]],
            label="distance"
        )

        # ---- RIGHT: distance vs time ----
        p2_plot = plot(
            time[1:i],
            distances[1:i],
            xlabel="Time",
            ylabel="Distance",
            title="Distance vs Time",
            label="d(t)"
        )

        hline!([0.02], linestyle=:dash, label="d_hat")

        # ---- Combine both ----
        plot(p1_plot, p2_plot, layout=(1,2), size=(900,400))
    end

    gif(anim, filename, fps=30)
end


# -------------------------------
# Main
# -------------------------------
function main()
    println("Running IPC simulation with diagnostics...")

    traj0, traj1, distances = simulate()

    println("Generating animation with distance plot...")

    visualize(traj0, traj1, distances)

    println("Done. Check ipc_simulation.gif")
end


# Run
main()

Here's my earlier investigation of IPC (using Python)...





Happy code digging...

Wednesday, April 8, 2026

Half Sync - Half Async design pattern implemented using Julia...

My deep study of the Android AsyncTask framework many years ago - a perfect example of this design pattern by bright Google engineers...





The Julia Code


struct Task   # note: shadows Base.Task; fine in a fresh script
    id::Int
    payload::Float64
end

function worker(id, ch::Channel)
    for task in ch
        println("Worker $id processing Task $(task.id)")
        result = sum(sin.(1:10^6 .* task.payload))
        println("Worker $id finished Task $(task.id)")
    end
    println("Worker $id shutting down")
end

function async_producer(ch::Channel, n::Int)
    for i in 1:n
        sleep(rand())
        println("Producing Task $i")
        put!(ch, Task(i, rand()))
    end
    close(ch)
end

function run_system(num_tasks=10, num_workers=4)
    ch = Channel{Task}(32)

    @sync begin
        # Producer runs as async task tracked by @sync
        @async async_producer(ch, num_tasks)

        # Workers
        for i in 1:num_workers
            Threads.@spawn worker(i, ch)
        end
    end
end

run_system(20, Threads.nthreads())


1. The Pattern Refresher

Half-Sync/Half-Async splits a system into:

🔹 Async Layer 

  • Non-blocking

  • Event-driven

  • Produces work

🔹 Sync Layer 

  • Blocking / CPU-bound

  • Deterministic execution

  • Processes work

🔹 Boundary 

  • A queue (here: Channel)

  • Decouples the two layers

2. Mapping The Code to the Pattern

🔸 (A) Boundary → Channel

ch = Channel{Task}(32)

This is the core of the pattern.

👉 It acts as:

  • A thread-safe queue

  • A decoupling buffer

  • A synchronization boundary

Interpretation:

“Async world hands off work to Sync world through a controlled interface.”

(B) Async Layer → async_producer

@async async_producer(ch, num_tasks)

Inside:

for i in 1:n
    sleep(rand())
    put!(ch, Task(i, rand()))
end
close(ch)

Why this is “Async”:

  • @async → cooperative scheduling (non-blocking)

  • sleep(rand()) → simulates unpredictable external events

  • put! → hands off work without doing computation

Conceptual role:

“I don’t process. I just observe and emit events.”

(C) Sync Layer → worker

Threads.@spawn worker(i, ch)

Inside:

for task in ch
    result = sum(sin.(1:10^6 .* task.payload))
end

This part needs attention. Read the following text very carefully - because this is the beauty of Julia.

The Key Idea

for task in ch
    ...
end

👉 This is syntactic sugar over repeated take! calls.

What for task in ch Actually Means

Internally, Julia treats a Channel as an iterator.

This loop:

for task in ch
    process(task)
end

is roughly equivalent to:

while true
    task = take!(ch)   # <-- blocking call
    process(task)
end

with one important addition:

  • It automatically stops when the channel is closed

Why It Blocks

Because take!(ch) is blocking, and the loop uses it internally.

What happens step-by-step:

  1. Worker reaches:

    for task in ch
  2. Julia internally does:

    task = take!(ch)
  3. If:
    • Channel has data → continues immediately
    • Channel is empty → worker thread sleeps (blocks)

👉 That’s the blocking behavior

What About Channel Closure?

This is the elegant part.

When you do:

close(ch)

Then:

  • take!(ch) continues to return remaining items
  • After channel is empty → iteration stops automatically

So this:

for task in ch

is equivalent to:


while true
    if isopen(ch) || isready(ch)
        task = take!(ch)
        process(task)
    else
        break
    end
end

Conceptual role:

“Give me work. I will process it fully and deterministically.”

Why This Matters in Half-Sync/Half-Async

This line:

for task in ch

is doing three jobs at once:

1. Blocking wait (Sync behavior)

  • Waits for work → like a worker thread

2. Queue consumption

  • Pulls tasks from async layer

3. Shutdown coordination

  • Stops automatically when async layer signals completion (close)

 

(D) Coordination → @sync

@sync begin
    @async async_producer(...)
    Threads.@spawn worker(...)
end

This is not part of the original pattern per se, but in Julia it ensures:

  • The system behaves like a long-running service

  • Main thread waits for both layers

3. End-to-End Flow (Pattern in Action)

Step-by-step:

  1. Async Layer wakes up

    • Generates a task (like a sensor or network event)

  2. Task is enqueued

    put!(ch, Task(...))
    
  3. Sync Layer pulls work

    task = take!(ch)
    
  4. Processing happens

    • Heavy computation (sin, sum, etc.)

  5. Repeat until the channel closes

4. Why This is Half-Sync/Half-Async (Not Just Threads)

Because of the strict separation of concerns:

Concern          Where handled
-------          -------------
Event timing     Async layer
Work queueing    Channel
Execution        Sync layer

👉 The producer never processes
👉 The worker never generates events

That separation is the essence of the pattern

5. Key Properties The Code Achieves

Decoupling

  • Producer speed ≠ Worker speed

  • Buffered via Channel(32)

Backpressure

  • If workers are slow → channel fills → put! blocks

  • Natural flow control
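The blocking put! that provides this backpressure can be observed directly with a deliberately tiny buffer. A sketch:

```julia
ch = Channel{Int}(2)        # deliberately tiny buffer
put!(ch, 1); put!(ch, 2)    # buffer is now full

producer = @async begin
    put!(ch, 3)             # blocks here until a consumer makes room
end

sleep(0.1)
blocked = !istaskdone(producer)   # producer is stuck on the full channel

take!(ch)                   # consumer frees one slot...
wait(producer)              # ...and the blocked put! immediately completes
println(blocked)            # true: backpressure was applied
```

A slow consumer therefore throttles the producer automatically, with no explicit flow-control code.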

Scalability

Threads.@spawn worker(i, ch)
  • Increase workers → parallelism increases

Clean Shutdown

close(ch)
  • Workers exit automatically via:

for task in ch

6. Subtle but Deep Insight

The system is not just parallel — it is:

A streaming system with a controlled execution boundary

This is exactly how:

  • High-performance servers

  • Simulation engines

  • Data pipelines

are designed internally.