Wednesday, April 8, 2026

Half-Sync/Half-Async design pattern implemented in Julia...

I studied the Android AsyncTask framework in depth many years ago - a perfect example of this design pattern from Google's engineers...





The Julia Code


struct Task            # NOTE: shadows Base.Task inside Main; fine for a demo script
    id::Int
    payload::Float64
end

function worker(id, ch::Channel)
    for task in ch                       # blocking take! until the channel is closed and drained
        println("Worker $id processing Task $(task.id)")
        result = sum(sin.((1:10^6) .* task.payload))   # CPU-bound work
        println("Worker $id finished Task $(task.id)")
    end
    println("Worker $id shutting down")
end

function async_producer(ch::Channel, n::Int)
    for i in 1:n
        sleep(rand())                    # simulate an unpredictable event source
        println("Producing Task $i")
        put!(ch, Task(i, rand()))
    end
    close(ch)
end

function run_system(num_tasks=10, num_workers=4)
    ch = Channel{Task}(32)

    @sync begin
        # Producer runs as an async task tracked by @sync
        @async async_producer(ch, num_tasks)

        # Workers run on OS threads, also tracked by @sync
        for i in 1:num_workers
            Threads.@spawn worker(i, ch)
        end
    end
end

run_system(20, Threads.nthreads())


1. The Pattern Refresher

Half-Sync/Half-Async splits a system into:

🔹 Async Layer 

  • Non-blocking

  • Event-driven

  • Produces work

🔹 Sync Layer 

  • Blocking / CPU-bound

  • Deterministic execution

  • Processes work

🔹 Boundary 

  • A queue (here: Channel)

  • Decouples the two layers

2. Mapping The Code to the Pattern

🔸 (A) Boundary → Channel

ch = Channel{Task}(32)

This is the core of the pattern.

👉 It acts as:

  • A thread-safe queue

  • A decoupling buffer

  • A synchronization boundary

Interpretation:

“Async world hands off work to Sync world through a controlled interface.”
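As a minimal sketch of that handoff interface (the variable name `msgs` is mine, not from the code above):

```julia
# A Channel is a thread-safe FIFO queue: put! enqueues, take! dequeues.
msgs = Channel{Int}(2)   # buffered boundary with capacity 2
put!(msgs, 1)
put!(msgs, 2)
println(take!(msgs))     # prints 1 (FIFO order)
println(take!(msgs))     # prints 2
close(msgs)
```

Because `put!` and `take!` are safe to call from different tasks and threads, the two layers can share this one object without any explicit locking.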

(B) Async Layer → async_producer

@async async_producer(ch, num_tasks)

Inside:

for i in 1:n
    sleep(rand())
    put!(ch, Task(i, rand()))
end
close(ch)

Why this is “Async”:

  • @async → cooperative scheduling (non-blocking)

  • sleep(rand()) → simulates unpredictable external events

  • put! → hands off work without doing computation

Conceptual role:

“I don’t process. I just observe and emit events.”

(C) Sync Layer → worker

Threads.@spawn worker(i, ch)

Inside:

for task in ch
    result = sum(sin.((1:10^6) .* task.payload))
end

Why this is “Sync”:

  • take! (via for task in ch) → blocking

  • CPU-heavy computation

  • Runs on real OS threads

Conceptual role:

“Give me work. I will process it fully and deterministically.”

(D) Coordination → @sync

@sync begin
    @async async_producer(...)
    Threads.@spawn worker(...)
end

This is not part of the original pattern per se, but in Julia it ensures:

  • The system behaves like a long-running service

  • Main thread waits for both layers

3. End-to-End Flow (Pattern in Action)

Step-by-step:

  1. Async Layer wakes up

    • Generates a task (like a sensor or network event)

  2. Task is enqueued

    put!(ch, Task(...))
    
  3. Sync Layer pulls work

    task = take!(ch)
    
  4. Processing happens

    • Heavy computation (sin, sum, etc.)

  5. Repeat until channel closes

4. Why This is Half-Sync/Half-Async (Not Just Threads)

Because of strict separation of concerns:

| Concern | Where handled |
| --- | --- |
| Event timing | Async layer |
| Work queueing | Channel |
| Execution | Sync layer |

👉 The producer never processes
👉 The worker never generates events

That separation is the essence of the pattern.

5. Key Properties The Code Achieves

Decoupling

  • Producer speed ≠ Worker speed

  • Buffered via Channel(32)

Backpressure

  • If workers are slow → channel fills → put! blocks

  • Natural flow control
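That blocking behaviour is easy to see in isolation. A small sketch, assuming a deliberately tiny capacity of 1:

```julia
# Backpressure: with capacity 1, a second put! blocks until someone takes.
ch = Channel{Int}(1)
put!(ch, 1)                # fills the buffer
t = @async put!(ch, 2)     # this put! suspends inside the task...
sleep(0.1)
println(istaskdone(t))     # false: the producer is parked (backpressure)
take!(ch)                  # free one slot
wait(t)                    # ...and the blocked put! completes
println(take!(ch))         # prints 2
close(ch)
```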

Scalability

Threads.@spawn worker(i, ch)
  • Increase workers → parallelism increases

Clean Shutdown

close(ch)
  • Workers exit automatically via:

for task in ch
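A minimal sketch of that shutdown behaviour (the values are illustrative):

```julia
# Iterating a Channel ends once it is closed AND drained.
ch = Channel{Int}(4)
foreach(i -> put!(ch, i), 1:3)
close(ch)                  # no further puts; buffered items stay takeable
collected = Int[]
for x in ch                # exits cleanly when the buffer is empty
    push!(collected, x)
end
println(collected)         # prints [1, 2, 3]
```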

6. Subtle but Deep Insight

The system is not just parallel — it is:

A streaming system with a controlled execution boundary

This is exactly how:

  • High-performance servers

  • Simulation engines

  • Data pipelines

are designed internally.

Structured Concurrency in Julia: A Clean Approach Using @sync and Threads.@spawn

Concurrency is often introduced with complexity—locks, counters, race conditions, and subtle bugs. But Julia offers a refreshing alternative: structured concurrency, where parallel execution is expressed clearly and safely.

Let’s explore this concept through a simple, elegant example.

The Scenario

We simulate three students—Ridit, Ishan, and Manav—each working on a task that takes a different amount of time.

Instead of executing these tasks sequentially, we want them to run in parallel, and then proceed only when all are complete.

The Code:



function student_task(name::String, seconds::Real)
    println(name, " is starting the task")
    sleep(seconds)
    println(name, " has finished the task")
    return seconds
end

@sync begin
    # Run with: julia --threads=4 (or any number > 1)
    ridit = Threads.@spawn student_task("Ridit", 10.0)
    ishan = Threads.@spawn student_task("Ishan", 15.0)
    manav = Threads.@spawn student_task("Manav", 20.0)

    println("Ridit took ", fetch(ridit), " seconds")
    println("Ishan took ", fetch(ishan), " seconds")
    println("Manav took ", fetch(manav), " seconds")
end

println("Now as all of the students have finished the task, the invigilator will leave")




Breaking It Down

1. Task Definition

student_task(name::String, seconds::Real)

Each task:

  • Announces when it starts

  • Sleeps for a given duration (simulating work)

  • Announces completion

  • Returns the time taken

This is a stand-in for any real workload—simulation steps, I/O operations, or compute-heavy routines.

2. Parallel Execution with Threads.@spawn

ridit = Threads.@spawn student_task("Ridit", 10.0)
  • Threads.@spawn launches a task asynchronously

  • Julia schedules it across available CPU threads

  • Returns a Task handle immediately

All three students start working concurrently, not one after another.

3. Synchronization with @sync

@sync begin
    ...
end

This is the centerpiece.

  • @sync waits for all spawned tasks inside its block to complete

  • It automatically tracks these tasks—no manual counting required

This is structured concurrency in action:

The lifetime of tasks is tied to the scope in which they are created.

4. Fetching Results Safely

fetch(ridit)
  • Retrieves the result of the task

  • If the task is not finished, it blocks until completion

Even though tasks run in parallel, fetch ensures correctness.
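A small sketch of that blocking behaviour (the computed value is arbitrary):

```julia
# fetch waits for the spawned task to finish, then returns its result.
t = Threads.@spawn begin
    sleep(0.2)           # simulate work
    21 * 2
end
println(istaskdone(t))   # false: the task is still sleeping
println(fetch(t))        # blocks ~0.2 s, then prints 42
```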

Execution Behavior

The tasks take:

  • Ridit → 10 seconds

  • Ishan → 15 seconds

  • Manav → 20 seconds

Because they run concurrently:

  • Total runtime ≈ 20 seconds, not 45 seconds

This is the key benefit: time compression via parallelism.
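The claim is easy to check with Base's `@elapsed` (durations scaled down here so the sketch runs quickly):

```julia
# Three concurrent sleeps cost roughly the longest one, not the sum.
elapsed = @elapsed begin
    @sync for s in (0.1, 0.15, 0.2)
        Threads.@spawn sleep(s)
    end
end
println(elapsed)   # roughly 0.2, not 0.45
```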

Structured Concurrency vs Traditional Approaches

In many languages, you would need:

  • Counters (like latches)

  • Explicit joins

  • Manual bookkeeping

Here, Julia abstracts that away.

Traditional thinking:

“Track how many tasks are left.”

Julia thinking:

“Everything started here must finish before leaving.”

This shift reduces:

  • Bugs

  • Cognitive load

  • Boilerplate code

Final Insight

This small example captures a powerful idea:

Concurrency should be structured, not improvised.

With @sync and Threads.@spawn, Julia gives you:

  • Clarity of intent

  • Safety by design

  • Performance with simplicity

Tuesday, April 7, 2026

Numerical integration - Trapezoidal vs Simpson - Reversing the wheel of learning - High end computer science is plain Maths


1. The Core Idea

We want to approximate:

$$\int_a^b f(x)\,dx$$

by sampling the function at discrete points.

2. Trapezoidal Rule

Instead of rectangles, we use trapezoids.

$$\int_a^b f(x)\,dx \approx \frac{h}{2}\left[f(x_0) + 2\sum_{i=1}^{n-1} f(x_i) + f(x_n)\right]$$

Key properties:

  • Error ~ O(h²)
  • Assumes linear variation between points

👉 Think: “connect the dots with straight lines”

3. Simpson’s Rule

Now we approximate using parabolas (quadratic fit).

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\left[f(x_0) + 4\sum_{\text{odd}} f(x_i) + 2\sum_{\text{even}} f(x_i) + f(x_n)\right]$$

Key properties:

  • Error ~ O(h⁴) (much faster convergence)
  • Requires even number of intervals

Think: “fit smooth curves instead of straight lines”

4. The Real Question: Which is Better?

| Method | Accuracy | Cost | When it works best |
| --- | --- | --- | --- |
| Trapezoidal | Medium | Low | Rough/oscillatory data |
| Simpson | High | Medium | Smooth functions |

5. A Perfect Test Function

Use something smooth but non-trivial:

$$f(x) = \sin(x), \qquad \int_0^\pi \sin(x)\,dx = 2$$

6. The source code (julia)

using Plots
gr()
f(x) = sin(x)

# Trapezoidal Rule
function trapezoidal(f, a, b, n)
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in 1:n-1
        s += f(a + i*h)
    end
    return h * s
end

# Simpson Rule (n must be even)
function simpson(f, a, b, n)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in 1:n-1
        x = a + i*h
        s += (i % 2 == 0 ? 2 : 4) * f(x)
    end
    return (h/3) * s
end

# Setup
a, b = 0, pi
exact = 2.0
#exact = exp(pi) - 1

ns = [4, 8, 16, 32, 64, 128, 256]

h_vals = Float64[]
trap_err = Float64[]
simp_err = Float64[]

for n in ns
    h = (b - a) / n
    push!(h_vals, h)
    t = trapezoidal(f, a, b, n)
    s = simpson(f, a, b, n)
    push!(trap_err, abs(t - exact))
    push!(simp_err, abs(s - exact))
end

# Plot
p1 = plot(h_vals, trap_err,
xscale = :log10, yscale = :log10,
marker = :circle,
label = "Trapezoidal",
xlabel = "Step size (h)",
ylabel = "Error",
title = "Error vs Step Size")

# Plot 2: Simpson
p2 = plot(h_vals, simp_err,
xscale = :log10, yscale = :log10,
marker = :square,
title = "Simpson Error",
xlabel = "Step size (h)", ylabel = "Error",
label = "Simpson")

# Combine side-by-side
p3 = plot(p1, p2,
layout = (1, 2), # 1 row, 2 columns
size = (1200, 400)) # wide figure

display(p3)

readline()   # keep the plot window open until Enter is pressed


Saturday, April 4, 2026

The ceiling of our Advanced Computing depends on the strength of the Mathematical floor - reversing the wheel of learning - delving into basic integral calculus - remembering Taylor series for Verlet Integration...

In software development, there is a common temptation to treat algorithms as black boxes. We often import a library, call a function, and celebrate when the simulation "looks right." However, as I’ve recently discovered while deep-diving into physics-based simulations, there comes a point where "looking right" isn't enough. To achieve stability, accuracy, and performance, you eventually have to put the car in reverse and head back to the fundamentals.

For me, that "reverse gear" moment happened with Verlet Integration.

Learning in Reverse

We are often taught to learn linearly: start with the theory, then the math, then the application. But in the trenches of technical development, we often do the opposite. We start with the code, run into a bug or a bottleneck, and then find ourselves peeling back layers of abstraction until we are staring at a calculus textbook.

This "reverse learning" is exactly how I ended up revisiting integral calculus. I wanted to understand how a particle "knows" where to go next without the numerical errors spiraling out of control.

The Case for Verlet Integration

In simple Euler integration, we calculate the next position from the current velocity. It’s intuitive, but it’s “leaky”: energy is systematically gained or lost due to truncation error, and your simulation eventually explodes.

Verlet integration is the elegant solution to this. Instead of relying on stored velocity, it uses the current and previous positions to calculate the next state. 

The mathematical derivation relies on the **Taylor Series expansion**. By looking at the expansion for both the forward and backward time steps:

$$x(t + \Delta t) = x(t) + v(t)\Delta t + \frac{1}{2}a(t)\Delta t^2 + \dots$$

$$x(t - \Delta t) = x(t) - v(t)\Delta t + \frac{1}{2}a(t)\Delta t^2 - \dots$$

When you add these equations together, the velocity terms cancel out, leaving you with a robust formula for the next position:

$$x(t + \Delta t) \approx 2x(t) - x(t - \Delta t) + a(t)\Delta t^2$$
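The update rule above drops straight into code. Here is a minimal sketch for a unit harmonic oscillator (my own example: x(0) = 1, v(0) = 0; the step size and Taylor-step bootstrap are illustrative choices):

```julia
# Position-Verlet for a(x) = -x, using x(t+Δt) ≈ 2x(t) - x(t-Δt) + a(t)Δt².
function verlet_period(; dt = 0.01)
    accel(x) = -x                    # unit spring: m = k = 1
    x_prev = 1.0                     # x(0) = 1, v(0) = 0
    x = 1.0 - 0.5 * dt^2             # bootstrap x(Δt) from one Taylor step
    for _ in 2:round(Int, 2π / dt)   # march through one full period
        x_prev, x = x, 2x - x_prev + accel(x) * dt^2
    end
    return x
end

println(verlet_period())             # ≈ 1.0: the orbit closes, energy stays bounded
```

The exact solution is cos(t), so after one period the position should come back to 1; running the same loop with an explicit Euler update instead slowly spirals outward.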

Why the Math Matters

Delving into the integral calculus behind these movements isn't just an academic exercise. It provides three critical advantages:

- **Stability:** Understanding the error terms allows you to build simulations that remain stable over long periods.

- **Performance:** When you understand the underlying math, you can often simplify expressions to reduce the number of floating-point operations.

- **Debuggability:** You stop guessing why a collision failed or why a fluid simulation is behaving erratically; the math tells you exactly where the logic broke down.

Final Thoughts

Computing is often described as the "art of the possible," but advanced computing is the **science of the precise**. If you find yourself hitting a wall in your simulations or graphics projects, don't be afraid to stop coding and start deriving. Sometimes, the fastest way forward is to go all the way back to the first principles of calculus.

Here's today's exploration of the basics of integral calculus.

The first integral that we’ll look at is the integral of a power of $x$:

$$\int x^n\,dx = \frac{x^{n+1}}{n+1} + c, \qquad n \neq -1$$

The general rule when integrating a power of $x$ is that we add one onto the exponent and then divide by the new exponent. It is clear (hopefully) that we will need to avoid $n = -1$ in this formula: if we allow $n = -1$ we will end up with division by zero.
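A quick numerical sanity check of the power rule (the helper name `F` and the test interval are mine):

```julia
# Power rule: ∫ x^n dx = x^(n+1)/(n+1) + c, valid for n ≠ -1.
# Check ∫₁² x³ dx: the rule gives 2⁴/4 - 1⁴/4 = 3.75.
F(x, n) = x^(n + 1) / (n + 1)        # antiderivative from the rule
exact = F(2.0, 3) - F(1.0, 3)        # 3.75
h = 1e-5                             # crude left Riemann sum for comparison
approx = h * sum(x -> x^3, 1:h:2-h)
println((exact, round(approx, digits = 4)))
```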

Next is one of the easier integrals, but it always seems to cause problems for people.

If you remember that all we’re asking is what we differentiated to get the integrand, this is pretty simple, but it does seem to cause problems on occasion.

Let’s now take a look at the trig functions.

Now, let’s take care of exponential and logarithm functions.

Finally, let’s take care of the inverse trig and hyperbolic functions.

My exploration continues...

In search of #WhoamI...

Here we go... how it all started - being inspired by my son's work in Blender...