Curtin University · Applied AI Research

LocoLab

Frontier AI on a budget. Crazy, right? We study what local AI can actually do on modest hardware -- and build tools for educators, students, and organisations who need privacy-first AI.

Read the Docs · GitHub
Projects

Four projects. One lab. Zero cloud dependency.

Each project tackles a different dimension of local AI -- from fine-tuning and routing to benchmarking, multi-GPU research, and AI-powered teaching scenarios.

🔮

LocoLLM

A routed swarm of tiny specialist models. Fine-tuned LoRA adapters behind a smart router, all running on consumer hardware. The flagship project.

locollm.org →
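To make the "smart router" idea concrete, here is a minimal, hypothetical sketch of keyword-based routing to specialist adapters. The adapter names, keyword sets, and `route` function are illustrative assumptions, not LocoLLM's actual API; a real router would likely use an embedding classifier rather than keyword overlap.

```python
# Hypothetical sketch of LocoLLM-style routing. Adapter names and
# keyword sets are invented for illustration only.
ADAPTERS = {
    "math": {"equation", "integral", "solve", "algebra"},
    "code": {"python", "function", "bug", "compile"},
    "writing": {"essay", "paragraph", "summarise", "draft"},
}

def route(prompt: str, default: str = "writing") -> str:
    """Pick the specialist adapter whose keywords best match the prompt."""
    words = set(prompt.lower().split())
    scores = {name: len(words & kws) for name, kws in ADAPTERS.items()}
    best = max(scores, key=scores.get)
    # Fall back to a general-purpose adapter when nothing matches.
    return best if scores[best] > 0 else default

print(route("solve this integral"))  # -> math
```

The design point is that the router is cheap: dispatch costs almost nothing, so the expensive part (a fine-tuned LoRA specialist) only loads for the task it is good at.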
🛠

LocoBench

Systematic benchmarking of LLM inference across every consumer GPU VRAM tier. The floor, not the ceiling. If it runs here, it runs on your card.

locobench.org →
🚚

LocoConvoy

Multi-GPU parallelism on consumer PCIe hardware. Load balancing, Mixture of Agents, and speculative decoding without NVLink.

lococonvoy.org →
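Speculative decoding, mentioned above, is easy to sketch in miniature: a cheap draft model proposes several tokens, and the larger target model verifies them in one pass, accepting the longest agreeing prefix. The toy below uses fixed token lists in place of real models; it is an illustration of the accept/reject loop, not LocoConvoy's implementation.

```python
# Toy illustration of speculative decoding's accept/reject step.
# Real systems compare model distributions; here, token lists stand in
# for the draft and target models (hypothetical example).
def speculative_step(draft_tokens, target_tokens):
    """Accept the longest prefix of the draft that the target agrees with;
    on the first mismatch, emit the target's token instead."""
    accepted = []
    for d, t in zip(draft_tokens, target_tokens):
        if d == t:
            accepted.append(d)       # draft token verified by the target
        else:
            accepted.append(t)       # target overrides; stop accepting drafts
            break
    else:
        # Every draft token was accepted: the target's verification pass
        # also yields one bonus token for free.
        if len(target_tokens) > len(draft_tokens):
            accepted.append(target_tokens[len(draft_tokens)])
    return accepted

print(speculative_step(["the", "cat", "sat"], ["the", "cat", "ran", "off"]))
```

The payoff on PCIe-only rigs is that several tokens can be committed per slow target-model pass, trading cheap draft compute for fewer expensive ones.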
🎭

LocoEnsayo

AI-populated rehearsal environments for professional education. Students practise in realistic simulated organisations before the stakes are real.

locoensayo.org →
Research

Six studies. From floor-level AI to classroom interaction.

Our research explores what local AI can do for education, how students interact with AI systems, and what small models actually achieve on consumer hardware.

💬

Keep Asking — Study 1

Does a conversational nudge shift students from passive delegation to active conversation and improve task outcomes? Tested with frontier models.

Active

Keep Asking — Study 2

Can nudged students using a weak local model match un-nudged students on a frontier model? Reframing AI equity as a habits problem.

Planned
🧠

Cognitive Strategy Transfer

A framework for understanding how cognitive strategies transfer across AI-assisted learning contexts. A four-paper series.

Active
🎓

DSR AI Education Simulation

Design science research on AI-powered education simulations.

Active
📏

Context Length Effects

How context window size affects small language model performance on consumer hardware.

Planned

Perceived Intelligence vs Token Rate

Relationship between perceived AI intelligence and token generation speed.

Planned
All Research →

Six machines. All secondhand. All serious.

The entire fleet was assembled opportunistically -- the right capability at the right price, not a planned procurement. Total cost: well under $1,000 AUD.

Machine · Role · Key Hardware
Colmena · Multi-GPU inference hive, LocoBench primary · WEIHO 8-GPU chassis, floor cards per VRAM tier
Burro · Overnight fine-tuning · IBM x3500 M4, Tesla P100 16 GB HBM2
Cerebro · LocoEnsayo inference host · Ryzen 5 2600, RTX 2060 Super 8 GB
Peque · Reference floor node · Dell OptiPlex 990, GTX 1650 OC 4 GB
Tortuga · Multi-GPU test bench · ASUS B250 Mining Expert, 19× PCIe slots, 1250 W PSU
Poco · Remote terminal, Apple Silicon testing · MacBook M1, 16 GB unified memory
Philosophy

Constraints are the research design.

We are not mapping the floor of what AI can do because we cannot afford the ceiling. We are mapping the floor because most people live there, and nobody is documenting it honestly.

The institutional restriction that started LocoLab was not an obstacle. It was a clarification. It forced the question: what is AI actually for in an education context? The answer -- amplifying thinking, not replacing it -- turned out to be better served by local small models than by frontier alternatives.

Secondhand hardware, small models, and honest baselines produce research that the A100 crowd is not doing -- not because they could not, but because the question only becomes visible from the floor.

Loco by name. Serious by intent.

Contact

Say hello.

LocoLab is a School of Marketing and Management initiative at Curtin University, Perth, Western Australia. Whether you're a student, researcher, or just curious -- we'd love to hear from you.

Project Lead: Michael Borck

Get in Touch · GitHub