Thursday, April 30, 2026

Ancient Logic Meets Early AI: The Surprising Parallels Between Prolog and Sanskrit - Engineers of Bharat - wake up and embrace Sanskrit...


When we think about Artificial Intelligence, we usually picture modern neural networks, GPUs, and massive datasets. But the intellectual roots of AI go much deeper—into symbolic reasoning, formal logic, and surprisingly, ancient linguistic traditions. One of the most fascinating comparisons is between early AI systems built using Prolog and the structure of Sanskrit, one of the oldest and most rigorously defined languages in human history.

This is not a superficial analogy. At a structural and philosophical level, both Prolog and Sanskrit share striking similarities in how they represent knowledge, rules, and inference.

1. Rule-Based Systems: Sutras vs Clauses

Early AI systems, especially those built in Prolog, rely heavily on rules and facts. A Prolog program is essentially a knowledge base composed of logical clauses:

  • Facts: Statements that are always true

  • Rules: Conditional relationships that derive new truths

Similarly, Sanskrit—especially as formalized by the ancient grammarian Pāṇini—is built on a system of sutras (rules). These are concise, highly optimized statements that define how words are formed and how grammar operates.

Both systems:

  • Encode knowledge as compact rules

  • Allow complex structures to emerge from simple primitives

  • Depend on rule application rather than procedural steps

In a sense, Pāṇini’s grammar can be viewed as one of the earliest known “programs.”

2. Declarative Nature

Prolog is a declarative language. You don’t tell the system how to solve a problem—you tell it what is true, and the system figures out the rest through logical inference.

Sanskrit grammar operates similarly:

  • It defines what constitutes valid language

  • It does not prescribe step-by-step generation in a procedural sense

  • Instead, valid expressions are derived through rule application

This declarative paradigm is fundamentally different from imperative programming—and both Prolog and Sanskrit embody it elegantly.

3. Pattern Matching and Unification

One of the core mechanisms in Prolog is unification—a process of matching patterns and binding variables to satisfy logical conditions.

Example (conceptually):

parent(X, Y) :- mother(X, Y).

The system tries to match patterns and infer relationships.

In Sanskrit:

  • Word formation and sentence construction involve pattern transformations

  • Roots (dhatus) combine with suffixes following strict matching rules

  • Morphological changes depend on context-sensitive patterns

This resembles a form of linguistic unification, where structures are matched and transformed based on rules.

4. Backtracking and Multiple Interpretations

Prolog uses backtracking to explore multiple possible solutions. If one path fails, it goes back and tries another.

Sanskrit, especially in classical literature:

  • Allows multiple valid interpretations of a sentence

  • Meaning can depend on context, case endings, and word order

  • Ambiguity is resolved through structured inference

While Sanskrit doesn’t “execute” backtracking computationally, its structure supports multi-path interpretation, similar to logical exploration in Prolog.

5. Compositionality and Generative Power

Both systems are highly compositional:

  • In Prolog, small rules combine to solve complex problems

  • In Sanskrit, small grammatical units combine to generate vast expressive possibilities

This compositional nature leads to:

  • Scalability of expression

  • Elegant reuse of rules

  • High generative capacity from limited primitives

6. Knowledge Representation

Prolog was designed for symbolic AI, where knowledge is explicitly represented and reasoned about.

Sanskrit, particularly in philosophical and scientific texts:

  • Encodes knowledge in a structured, rule-based format

  • Maintains clarity and precision in meaning

  • Supports logical discourse in fields like mathematics, astronomy, and philosophy

This makes Sanskrit not just a language, but a knowledge representation system.

7. Minimalism and Compression

Pāṇini’s grammar is famous for its extreme brevity. Rules are compressed using meta-rules, recursion, and symbolic shorthand.

Prolog also encourages:

  • Minimal representations

  • Reusable logic

  • Compact expression of complex relationships

Both systems aim for maximum expressiveness with minimal redundancy—a hallmark of elegant design.

8. Philosophical Foundations

At a deeper level, both Prolog and Sanskrit emerge from traditions that value:

  • Logic over procedure

  • Structure over execution

  • Inference over instruction

Prolog comes from formal logic and computational theory. Sanskrit emerges from a philosophical tradition deeply concerned with language, meaning, and cognition.

The convergence is not accidental—it reflects a shared pursuit of modeling intelligence through structure.

Conclusion: Rediscovering Intelligence Through Structure

Modern AI has largely shifted toward data-driven approaches like deep learning. But the comparison between Prolog and Sanskrit reminds us of an alternative vision of intelligence—one rooted in rules, logic, and symbolic reasoning.

For developers, linguists, and AI researchers, this intersection offers a powerful insight:

Intelligence is not just about learning patterns from data—it is also about representing and manipulating knowledge with precision.

In that sense, ancient Sanskrit and early AI are not distant domains—they are parallel explorations of the same fundamental question:

How can structured rules give rise to intelligent behavior?

If we revisit these ideas with modern tools, we may find that the future of AI is not just in neural networks—but also in rediscovering the elegance of symbolic systems that civilizations mastered thousands of years ago.

Let’s make this concrete with a small Prolog program, and then examine it through the lens of Sanskrit grammar and structure—not as a metaphor, but as a structural comparison.


🔹 A Simple Prolog Program

% Facts
father(ram, shyam).
father(ram, sita).
mother(gita, shyam).
mother(gita, sita).

% Rule
parent(X, Y) :- father(X, Y).
parent(X, Y) :- mother(X, Y).

% Rule for sibling relationship
sibling(X, Y) :- parent(Z, X), parent(Z, Y), X \= Y.

What this program does:

  • Defines facts about family relationships

  • Defines rules to infer:

    • Who is a parent

    • Who are siblings

Example query:

?- sibling(shyam, sita).

Output:

true.

Now, an Analysis Through the Lens of Sanskrit

We’ll map key Prolog concepts to structural principles found in Sanskrit, especially in the grammatical system of Pāṇini and his work, the Ashtadhyayi.

1. Facts as “Pratijñā” (Given Truths)

In Prolog:

father(ram, shyam).

This is an atomic truth.

In Sanskrit:

  • This resembles a semantic assertion, like:

    • रामः श्यामस्य पिता अस्ति (Rāma is Shyama’s father)

In the Paninian system:

  • Such statements are not “computed”

  • They are accepted inputs to the system

👉 Parallel:

  • Prolog facts = Given semantic truths (pratijñā-like statements)

2. Rules as Sutras (सूत्र)

Prolog rule:

parent(X, Y) :- father(X, Y).

This reads:

X is a parent of Y if X is a father of Y

In Sanskrit grammar:

  • A sutra defines transformation or classification rules

  • Example idea (not literal):

    • “If a root has property X, apply suffix Y”

👉 Both share:

  • Conditional structure

  • Minimal expression

  • High reuse

👉 Key insight:

  • Prolog rules behave like generative sutras—they don’t store outcomes, they define how to derive them

3. Variables as “Anubandha” (Markers / Placeholders)

In Prolog:

parent(X, Y)

  • X and Y are placeholders

In Sanskrit grammar:

  • Pāṇini uses markers (anubandhas) and abstract symbols

  • These are not actual words but meta-linguistic variables

👉 Parallel:

  • Prolog variables ≈ Paninian symbolic placeholders

They:

  • Do not carry meaning themselves

  • Gain meaning through substitution

4. Unification vs Sandhi / Morphological Matching

Prolog uses unification:

  • It tries to match:

parent(Z, X), parent(Z, Y)

In Sanskrit:

  • Word formation uses rule-based matching

  • Example:

    • Roots + suffixes combine only if conditions match

    • Sandhi rules merge sounds based on patterns

👉 Parallel:

  • Prolog unification ≈ rule-based linguistic matching

Both systems:

  • Depend on pattern compatibility

  • Apply transformations only when constraints are satisfied

5. Backtracking vs Interpretive Flexibility

In Prolog:

  • If one rule fails, it backtracks and tries another

In Sanskrit:

  • A sentence can allow multiple valid parses

  • Meaning emerges from:

    • case endings (vibhakti)

    • context

    • syntactic relations

Example:

  • Word order is flexible, but meaning is preserved via rules

👉 Parallel:

  • Prolog backtracking ≈ multi-path interpretation in Sanskrit parsing

6. The Sibling Rule as a Composite Sutra

sibling(X, Y) :- parent(Z, X), parent(Z, Y), X \= Y.

This is powerful:

  • It composes multiple rules

  • Introduces a constraint

In Sanskrit:

  • Complex constructions emerge from:

    • multiple interacting sutras

    • constraint rules (like “not equal” conditions in morphology)

👉 This resembles:

  • compound rule application (samāsa-like compositionality)

7. Negation Constraint (X \= Y)

This part:

X \= Y

Means:

  • X and Y must be different

In Sanskrit:

  • There are blocking rules (niyama / pratibandha)

  • Certain forms are prevented under specific conditions

👉 Parallel:

  • Logical negation ≈ grammatical restriction rules

8. Knowledge Emergence

Important insight:

  • Nowhere did we explicitly define:

sibling(shyam, sita).

Yet it emerges.

In Sanskrit:

  • Infinite valid sentences are generated from:

    • finite rules (sutras)

👉 Both systems:

  • Are generative, not enumerative

Deep Insight: Computation vs Derivation

Concept    | Prolog               | Sanskrit
-----------|----------------------|------------------------------
Knowledge  | Stored as facts      | Encoded via roots & meanings
Rules      | Logical clauses      | Sutras
Execution  | Query resolution     | Derivation (prakriya)
Engine     | Backtracking search  | Rule ordering + constraints
Output     | Logical truth        | Valid linguistic expression

Final Thought

If you look carefully, this Prolog program is not “code” in the modern imperative sense.

It is closer to a derivation system—and that is exactly what Sanskrit grammar is.

👉 Both answer the same deep question:

How can a finite set of rules generate an infinite space of valid structures?

That’s why many researchers—from early AI pioneers to modern computational linguists—have seen Sanskrit not just as a language, but as a formal system of knowledge representation, remarkably aligned with symbolic AI like Prolog.



Monday, April 27, 2026

Parents are the best Guru - the reason why I rediscovered myself as the Guru of my young son...

Taken from an X post...


They called it caste.

Look again.

A child sits beside his craftsman father.

Not in a classroom. Not with a certificate.

But inside a living workshop.

Hands learning before language does.

Skill transferring without textbooks.

No degree. No loan. No placement cell.

Just immersion.

This is apprenticeship.

Generation to generation.

Precision built through repetition, not exams.

And then we reframed it.

From *knowledge system* → to *social problem*.

From *skill inheritance* → to *rigid label*.

Yes, hierarchies existed. Yes, distortions happened.

But pause before flattening everything into one word.

Because something else was happening here too-

A self-sustaining skill economy.

No HR. No résumé. No unemployment portal.

Today?

We spend ₹10–20 lakh on degrees…

to still “learn on the job.”

So ask-

Did we reform a system…

or replace it with a costlier, slower one?

And more importantly-

Who lost more in that transition?

Wake up... the Hindu community of Bharat... The clock is ticking...

You will have to hit the ground running if you want to survive...

Here's what it takes to create a software engineer.

My young son, Ridit's tech blog.


Friday, April 17, 2026

Active Object Design Pattern - and a simple implementation using Julia - from Active Object paradigm of Symbian S60 to today's journey in Julia - a checkered career...

The Active Object Design Pattern is a concurrency pattern that decouples method invocation from method execution, allowing tasks to run asynchronously without blocking the caller.

At its core, an Active Object introduces a proxy that clients interact with. Instead of executing methods directly, the proxy places requests into a queue. A separate worker thread (or pool) processes these requests in the background. This creates a clean separation between what needs to be done and when/how it gets executed.

A typical Active Object system has four key components:

  • Proxy – exposes the interface to the client

  • Method Request – encapsulates a function call as an object or callable

  • Activation Queue – holds pending requests

  • Scheduler/Worker – executes requests asynchronously

This pattern is especially useful when:

  • You want to avoid blocking the main thread

  • You need controlled concurrency (e.g., limited worker threads)

  • You want to serialize access to shared resources safely

Here's a simple implementation of Active Object Design Pattern in Julia.


function ThreadedActiveObject(nworkers=4)
    ch = Channel{Function}(32)          # activation queue
    tasks = []

    for _ in 1:nworkers
        push!(tasks, Threads.@spawn begin
            for job in ch               # scheduler: pull requests in FIFO order
                Base.invokelatest(job)  # servant: execute the request
            end
        end)
    end

    return ch, tasks
end

function heavy_compute(n)
    s = 0.0
    for i in 1:n
        s += sin(i) * cos(i)
    end
    println("Computed sum for $n = $s on thread $(Threads.threadid())")
end

ao, tasks = ThreadedActiveObject(4)

# Proxy side: enqueue method requests instead of calling them directly
for i in 1:10
    put!(ao, () -> heavy_compute(10^7 + i))
end

close(ao)            # signal: no more requests
foreach(wait, tasks) # wait for the workers to drain the queue


Sequence Diagram:



Mapping The Code to Active Object Components

Let’s reinterpret the code piece by piece.

Proxy (Client Interface)

put!(ao, () -> heavy_compute(10^7 + i))

This is the proxy layer.

Why?

  • The caller is not executing the method directly
  • Instead, it:
    • wraps the request as a function (closure)
    • submits it to a queue

In classic Active Object:

proxy.method_call() → enqueue request

In my code:

put!(ao, job_function)

So:

The Channel (ao) acts as the proxy interface
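
To make the proxy role even more explicit, we could wrap the channel in a tiny type. A hypothetical sketch (ActiveObjectProxy and request! are names invented for this sketch, not part of the code above):

# Hypothetical explicit proxy wrapping the activation queue
struct ActiveObjectProxy
    ch::Channel{Function}
end

# The "method call": enqueue a request instead of executing it
request!(p::ActiveObjectProxy, f::Function) = put!(p.ch, f)

# Clients talk only to the proxy, never to the workers
proxy = ActiveObjectProxy(Channel{Function}(32))
request!(proxy, () -> println("hello from the active object"))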

Activation Queue

ch = Channel{Function}(32)

This is the Activation Queue.

  • Holds pending method requests
  • Thread-safe
  • Decouples producer and consumer

Classic role:

Queue<Request>

My version:

Channel{Function}

Each Function = a method request object

Method Request

() -> heavy_compute(10^7 + i)

This is a Method Request object, just expressed as a closure.

In traditional OO:

class PrintTask : MethodRequest {
    void execute() { ... }
}

In Julia:

() -> heavy_compute(10^7 + i)

Key idea:

  • Encapsulates:
    • what to do
    • data (i)
    • logic

Scheduler + Servant (Worker Threads)

Threads.@spawn begin
    for job in ch
        Base.invokelatest(job)
    end
end

This block plays two roles:

Scheduler

for job in ch

  • Pulls requests from the queue
  • Decides execution order (FIFO here)

This is the scheduler

Servant

Base.invokelatest(job)

  • Actually executes the request

This is the servant

So each worker thread is:

[ Scheduler + Servant ]

Thread Pool (Multiple Active Objects Workers)

for _ in 1:nworkers
    Threads.@spawn ...
end

  • Creates multiple workers
  • All consume from the same queue

This is a multi-threaded Active Object

Classic pattern often has:

  • 1 thread → 1 active object

My version:

N threads → shared activation queue

This is more like:

  • Active Object + Thread Pool hybrid
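
For contrast, here is what the classic 1-thread-per-object variant might look like in Julia, reusing the same ingredients (a minimal sketch, not the code above):

# Classic variant: one dedicated worker per active object
function SingleThreadActiveObject()
    ch = Channel{Function}(32)
    worker = Threads.@spawn for job in ch
        Base.invokelatest(job)   # requests run strictly one at a time, in FIFO order
    end
    return ch, worker
end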

Lifecycle Control

Closing the queue

close(ao)

  • Signals: no more requests
  • Workers stop after finishing remaining jobs

Waiting for completion

foreach(wait, tasks)

  • Ensures all scheduled work completes

And here's my journey through Symbian S60's Active Object paradigm - studied many years ago...


A story on my checkered software journey through the wilderness of concurrent programming - one important lesson: the basic idea remains the same across platforms...



There’s a certain kind of journey that doesn’t show up on résumés—the kind that winds through late nights, cryptic bugs, half-working abstractions, and those rare moments when something finally clicks. My journey through concurrent programming has been exactly that: a long walk through a wilderness where the landscape keeps changing, but the underlying terrain remains strangely familiar.

I started in the era of VC++. Back then, concurrency felt mechanical—almost industrial. I didn’t “design” concurrent systems; I wrestled them into submission. WaitForSingleObject, WaitForMultipleObjects—these weren’t just API calls to me; they were survival tools. I learned quickly that a missed signal or a mishandled handle could freeze everything into a silent deadlock. Threads were powerful, yes—but also unforgiving. I treated them with respect because I had no choice.

Then I encountered Symbian, and with it, the idea of Active Objects. That was a turning point for me. Instead of manually juggling threads, I began thinking in terms of events, schedulers, and requests. The system was still concurrent, but the chaos felt… organized. I wasn’t blocking threads anymore; I was orchestrating asynchronous flows. It felt like moving from brute force to something more deliberate.

It wasn’t effortless, though. The Active Object model demanded discipline. I had to understand request lifecycles, callbacks, and the implicit contract with the scheduler. But once it clicked, it changed how I saw concurrency—not just as parallel execution, but as controlled responsiveness.

Then came Android and Java. By that time, concurrency had evolved into something more structured. Executors, thread pools, futures, synchronized blocks—the tools were richer, the abstractions deeper. I could finally express intent more clearly: run this task, schedule that work, wait for these results. The raw edges of thread management were softened.

But I also realized something important: the old problems never really went away. Deadlocks still happened. Race conditions still crept in. Performance was still a careful balancing act. The tools had improved, but the responsibility was still mine.

And then I found Julia.

Julia felt different from the start. Lightweight tasks, channels, @async, Threads.@spawn—it was concurrency with a kind of fluidity I hadn’t experienced before. I could write code that looked almost sequential, yet behaved concurrently. Channels made communication feel natural again.

I remember watching tasks communicate over a channel, work flowing across threads, and thinking—this feels like everything I’ve learned, coming together. The low-level discipline from VC++, the event-driven thinking from Symbian, the structured abstractions from Java—they were all there, just distilled into something simpler and more expressive.

That’s when the most important lesson became clear to me.

Across all these platforms, languages, and paradigms—the surface keeps changing. APIs evolve. Syntax improves. Abstractions come and go.

But the core idea doesn’t change.

At its heart, concurrent programming has always been about a few simple truths:

  • Work can happen independently.

  • Coordination is harder than execution.

  • Communication is everything.

  • And timing… timing is where things either work beautifully or fall apart.

Whether I’m waiting on a kernel object, scheduling an active request, submitting tasks to an executor, or passing messages through a channel—I’m solving the same fundamental problem. I’m managing time, state, and interaction.

The wilderness hasn’t disappeared for me. I’ve just learned how to navigate it.

And maybe that’s what this journey really is—not moving from one technology to another, but moving from confusion to clarity. From fighting concurrency… to understanding it.

Thursday, April 16, 2026

Inter-thread communication in Julia - Channel and Wait-Notify...

There are a few ways we can accomplish inter-thread communication in Julia. In this article, we will look into two ways - via channel and via wait & notify.

Via Channel...

Here's an implementation of the Producer-Consumer problem in Julia using two threads. The producer creates values and puts them in a channel, whereas the consumer takes them from the channel. The producer and consumer run on two different threads.

using Base.Threads

ch = Channel{Int}(10)

@sync begin

    # Producer (runs on a thread)
    Threads.@spawn begin
        for i in 1:5
            println("Producing $i on thread $(threadid())")
            put!(ch, i)
            sleep(0.5)
        end
        close(ch)
    end

    # Consumer (runs on another thread)
    Threads.@spawn begin
        for val in ch
            println("Consuming $val on thread $(threadid())")
        end
    end

end

Let's try to dissect the above code.

The Key Idea: Channels are Iterable Streams

In Julia, a Channel is not just a queue—it implements the iteration protocol.

So when we write:

for val in ch
    println("Consuming $val on thread $(threadid())")
end

this is conceptually equivalent to:

while true
    val = take!(ch)   # blocks if empty
    println("Consuming $val on thread $(threadid())")
end

…but with one crucial addition:

The loop automatically stops when the channel is closed.

What Actually Happens Internally

When Julia executes:

for val in ch

it translates roughly into:

state = iterate(ch)

while state !== nothing
    (val, next_state) = state
    println(...)
    state = iterate(ch, next_state)
end

For Channel, iterate(ch) is defined such that (see the sketch after this list):

  • It internally calls take!(ch)

  • If data is available → returns (value, state)

  • If the channel is:

    • empty but open → it blocks (yields the task)

    • closed and empty → returns nothing → loop ends
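
To make that concrete, here is a minimal sketch that drives a channel through the iteration protocol by hand. It behaves exactly like the for loop above:

ch = Channel{Int}(2)
put!(ch, 1)
put!(ch, 2)
close(ch)

let st = iterate(ch)              # internally performs take!(ch)
    while st !== nothing
        (val, next) = st
        println("got $val")
        st = iterate(ch, next)    # returns nothing once closed and drained
    end
end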

Why This is Perfect for Concurrency

Let’s break down the behavior in our example:

Producer Thread

put!(ch, i)

  • Pushes data into the channel

  • If the channel buffer is full → producer blocks

Consumer Thread

for val in ch

  • If data is available → consumes immediately

  • If empty → consumer blocks (non-busy wait!)

  • If channel is closed → loop exits cleanly

Important: This is NOT Busy Waiting

A common mistake in other languages:

while(queue.empty()) { /* spin */ }

But Julia does this instead:

  • The consumer yields control

  • The scheduler runs another task

  • When put! happens → consumer is resumed

This is cooperative scheduling, not CPU spinning.

Why for val in ch is Better Than take! Loop

We could write:

while true
    val = take!(ch)
    println(val)
end

But then we must manually handle termination:

  • How do we know when to stop?

  • We'd need a sentinel value or extra signaling
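
For instance, a hypothetical sentinel version could look like this (the -1 end-marker is a convention invented for this sketch, not something Julia provides):

using Base.Threads

ch = Channel{Int}(10)

@sync begin
    # Producer: sends the data, then the sentinel
    Threads.@spawn begin
        for i in 1:5
            put!(ch, i)
        end
        put!(ch, -1)              # invented end-of-stream marker
    end

    # Consumer: must check for the sentinel explicitly
    Threads.@spawn begin
        while true
            val = take!(ch)
            val == -1 && break    # manual termination
            println("Consuming $val")
        end
    end
end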

With:

for val in ch

we get:

  • Automatic blocking

  • Automatic wake-up

  • Automatic termination on close(ch)

  • Cleaner, declarative code

The Role of close(ch)

This line in our producer is critical:

close(ch)

Without it:

  • The consumer will wait forever

  • Because it assumes more data might come

With it:

  • The iteration ends naturally

  • for loop exits → task completes

Subtle but Important Detail

Even though we used:

Threads.@spawn

The channel itself is thread-safe, meaning:

  • Multiple producers/consumers can safely operate

  • Synchronization is handled internally
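
A minimal sketch of the multi-producer case (the only subtle part is closing the channel after all producers finish):

using Base.Threads

ch = Channel{Int}(10)

# One consumer draining the shared channel
consumer = Threads.@spawn for val in ch
    println("Consuming $val on thread $(threadid())")
end

# Two producers writing to the same channel concurrently
@sync for p in 1:2
    Threads.@spawn for i in 1:3
        put!(ch, 10p + i)   # producer 1 sends 11..13, producer 2 sends 21..23
    end
end

close(ch)        # close only after every producer is done
wait(consumer)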

Final Insight

Consumer code:

for val in ch

is not just syntactic sugar—it encodes three things at once:

  1. Blocking synchronization (wait for data)

  2. Data flow semantics (consume stream)

  3. Termination protocol (stop on close)

That’s why it’s considered idiomatic Julia concurrency.

Via Event/Condition

This is pretty good for implementing signalling between threads.

Here's the code for such a system.

using Base.Threads

mtx = ReentrantLock()
cond = Base.GenericCondition(mtx)  # bind lock + condition together

@sync begin
    Threads.@spawn begin
        for i in 1:5
            println(i)
            sleep(1)
        end

        println("Now Waiting...")

        lock(mtx) do
            wait(cond)             # must hold the bound lock
        end

        println("Resuming!")

        for j in 6:10
            println(j)
            sleep(1)
        end
    end

    sleep(10)

    lock(mtx) do
        notify(cond)               # notify under the SAME lock
    end
end

Core Idea

A Condition in Julia is a wait queue tied to a lock.

Threads can:

  • wait(cond) → sleep until signaled
  • notify(cond) → wake waiting thread(s)

What happens in the code?

  1. Prints numbers 1 → 5
  2. Acquires lock and calls:

    wait(cond)
  3. Internally:
    • Releases mtx
    • Goes to sleep
    • Gets queued on cond

The thread is now blocked without consuming CPU

Main Thread (Producer / Notifier)

sleep(10)

lock(mtx) do
notify(cond)
end

What happens:

  1. After 10 seconds, main thread:
    • Acquires the same lock (mtx)
    • Calls notify(cond)
  2. This:
    • Wakes the waiting thread
    • That thread re-acquires the lock
    • Continues execution

Execution Timeline

Worker Thread                 Main Thread
--------------                -------------
1 → 5 printed
Now Waiting...
(wait → sleep)
                              sleep(10)
                              notify(cond)
Resuming!
6 → 10 printed

Critical Rules

1. Lock must be held

Both must be inside:

lock(mtx) do ... end

Otherwise:

ConcurrencyViolationError("lock must be held")

2. Same lock everywhere

cond = GenericCondition(mtx)

👉 You must use this exact mtx for:

  • wait
  • notify

3. wait is atomic

When calling:

wait(cond)

Julia:

  1. Releases lock
  2. Sleeps
  3. On notify → wakes up
  4. Re-acquires lock

This avoids race conditions
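
One caveat with the example above: if notify fired before the other task reached wait, the signal would be lost and the waiter would sleep forever. The usual remedy is to guard the wait with a state flag checked in a loop. A minimal sketch (the ready flag is an addition for this sketch, not part of the code above):

using Base.Threads

mtx   = ReentrantLock()
cond  = Base.GenericCondition(mtx)
ready = false                      # state protected by mtx

waiter = Threads.@spawn lock(mtx) do
    while !ready                   # loop guards against lost or spurious wakeups
        wait(cond)
    end
    println("Resuming!")
end

lock(mtx) do
    global ready = true            # update the state first...
    notify(cond)                   # ...then wake the waiter
end

wait(waiter)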

Conceptual Model

Think of it like:

Condition = Lock + Queue of waiting threads
  • wait → join queue
  • notify → wake one (or all)

When to use this?

Use Condition variables when:

  • You need pure signaling
  • No data needs to be transferred

Use Channel when:

  • You need data + synchronization

Wednesday, April 15, 2026

@async in Julia is not parallelism - but Threads.@spawn is...

The Core Idea

In Julia, @async gives you concurrency, not parallelism.

  • @async → runs multiple tasks, but on the same thread
  • Threads.@spawn → runs tasks on multiple CPU threads (true parallelism)

Demonstration 1

function task(name)
    println("$name running on thread ", Threads.threadid())
    sleep(1)
    println("$name finished on thread ", Threads.threadid())
end

@sync begin
    @async task("Task A")
    @async task("Task B")
end

If we run the above code, we will get:

Task A running on thread 1
Task B running on thread 1
Task A finished on thread 1
Task B finished on thread 1

What is happening internally?

Julia uses cooperative scheduling for @async:

  • Tasks yield control (e.g., sleep, I/O)
  • Scheduler switches between them
  • But execution remains on one OS thread

As you can see from the output, both tasks are running on the same thread.

Demonstration 2

function task(name)
    println("$name running on thread ", Threads.threadid())
    sleep(1)
    println("$name finished on thread ", Threads.threadid())
end

@sync begin
    Threads.@spawn task("Task A")
    Threads.@spawn task("Task B")
end

If we run the above code, we will get

Task B running on thread 3
Task A running on thread 4
Task B finished on thread 3
Task A finished on thread 4

As you can see from the output, two threads are running in parallel.
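
(One practical note: Threads.@spawn can only reach multiple threads if Julia itself was started with them, e.g. julia -t 4, or with the JULIA_NUM_THREADS environment variable set. Otherwise both tasks will report thread 1.)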

Summary


Feature       | @async           | Threads.@spawn
--------------|------------------|------------------------
Threads used  | Single thread    | Multiple threads
Parallelism   | No               | Yes
Concurrency   | Yes              | Yes
Best for      | I/O, networking  | CPU-heavy tasks
Scheduling    | Cooperative      | Preemptive (OS threads)

Sunday, April 12, 2026

IPC - Incremental Potential Contact - implemented using Julia...

Scientists have started noticing Julia.

The word spreads.

Not through a marketing campaign.

But through the Python-like ease of writing code, combined with the speed of C++.

- Physicists liked that Julia felt like writing equations.
- Engineers liked that it handled performance without boilerplate.
- Researchers loved that they could:

  • Prototype quickly
  • Scale to HPC when needed
  • Avoid rewriting everything in another language

The two-language problem vanished.

A prototype can be scaled to a production stage without much effort.

I am loving it.

I studied Incremental Potential Contact (IPC) some time ago and implemented it in Python. Today I used Julia to explore IPC.
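
For context: the actual IPC method (Li et al., 2020) uses a smoothly clamped log-barrier potential that activates only when the distance d drops below a threshold d̂:

$$ b(d) = -(d - \hat{d})^2 \, \ln\!\left(\frac{d}{\hat{d}}\right), \qquad 0 < d < \hat{d} $$

The code below uses a simpler 1/d-style repulsion in the same spirit - barrier-inspired rather than the exact IPC barrier.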

Here's the visualization.

And here's the source code...

using LinearAlgebra
using Plots
using Printf

# -------------------------------
# IPC-like Force (Barrier-inspired)
# -------------------------------
function ipc_force(p0::Vector{Float64}, p1::Vector{Float64},
                   kappa::Float64, d_hat::Float64)

    diff = p1 - p0
    d = norm(diff)

    eps = 1e-6
    d_safe = max(d, eps)

    # No interaction once the particles are farther apart than d_hat
    if d >= d_hat
        return zeros(3), zeros(3)
    end

    # Barrier-like force magnitude: grows without bound as d → 0
    f_mag = kappa * (1 / d_safe - 1 / d_hat)

    n = diff / d_safe       # unit vector from p0 toward p1

    f0 = -f_mag * n         # repulsive: push p0 away from p1
    f1 = -f0                # equal and opposite force on p1

    return f0, f1
end


# -------------------------------
# Simulation Core
# -------------------------------
function simulate(; dt=0.002, steps=500,
                  kappa=1.0, d_hat=0.02, mass=1.0)

    p0 = [-0.02, 0.0, 0.0]
    p1 = [ 0.02, 0.0, 0.0]

    # 🔥 stronger motion: particles head straight at each other
    v0 = [ 0.5, 0.0, 0.0]
    v1 = [-0.5, 0.0, 0.0]

    traj0 = Vector{Vector{Float64}}()
    traj1 = Vector{Vector{Float64}}()
    distances = Float64[]

    push!(traj0, copy(p0))
    push!(traj1, copy(p1))
    push!(distances, norm(p1 - p0))

    for step in 1:steps
        f0, f1 = ipc_force(p0, p1, kappa, d_hat)

        v0 += dt * f0 / mass
        v1 += dt * f1 / mass

        # ✅ damping bleeds off energy so the system settles
        damping = 0.99
        v0 *= damping
        v1 *= damping

        p0 += dt * v0
        p1 += dt * v1

        d = norm(p1 - p0)

        @printf("Step %d: distance = %.6f\n", step, d)

        push!(traj0, copy(p0))
        push!(traj1, copy(p1))
        push!(distances, d)
    end

    return traj0, traj1, distances
end

# -------------------------------
# Visualization (Animation + Graph)
# -------------------------------
function visualize(traj0, traj1, distances; dt=0.002,  # must match the simulation dt
                   filename="ipc_simulation.gif")

    time = collect(0:dt:dt*(length(distances)-1))

    anim = @animate for i in 1:length(traj0)

        pos0 = traj0[i]
        pos1 = traj1[i]

        # ---- LEFT: particle motion ----
        p1_plot = scatter(
            [pos0[1], pos1[1]],
            [pos0[2], pos1[2]],
            xlim=(-0.03, 0.03),
            ylim=(-0.02, 0.02),
            markersize=8,
            label="Particles",
            title="IPC Collision Avoidance"
        )

        plot!(
            [pos0[1], pos1[1]],
            [pos0[2], pos1[2]],
            label="distance"
        )

        # ---- RIGHT: distance vs time ----
        p2_plot = plot(
            time[1:i],
            distances[1:i],
            xlabel="Time",
            ylabel="Distance",
            title="Distance vs Time",
            label="d(t)"
        )

        hline!([0.02], linestyle=:dash, label="d_hat")

        # ---- Combine both ----
        plot(p1_plot, p2_plot, layout=(1,2), size=(900,400))
    end

    gif(anim, filename, fps=30)
end


# -------------------------------
# Main
# -------------------------------
function main()
    println("Running IPC simulation with diagnostics...")

    traj0, traj1, distances = simulate()

    println("Generating animation with distance plot...")

    visualize(traj0, traj1, distances)

    println("Done. Check ipc_simulation.gif")
end


# Run
main()

Here's my earlier investigation of IPC (using Python)...





Happy code digging...