Frontier AI on a budget. Crazy, right? We study what local AI can actually do on modest hardware -- and build tools for educators, students, and organisations that need privacy-first AI.
Each project tackles a different dimension of local AI -- from fine-tuning and routing to benchmarking, multi-GPU research, and AI-powered teaching scenarios.
- **LocoLLM** (locollm.org) -- A routed swarm of tiny specialist models: fine-tuned LoRA adapters behind a smart router, all running on consumer hardware. The flagship project.
- **LocoBench** (locobench.org) -- Systematic benchmarking of LLM inference across every consumer GPU VRAM tier. The floor, not the ceiling. If it runs here, it runs on your card.
- **LocoConvoy** (lococonvoy.org) -- Multi-GPU parallelism on consumer PCIe hardware: load balancing, Mixture of Agents, and speculative decoding without NVLink.
- **LocoEnsayo** (locoensayo.org) -- AI-populated rehearsal environments for professional education. Students practice in realistic simulated organisations before the stakes are real.

Our research explores what local AI can do for education, how students interact with AI systems, and what small models actually achieve on consumer hardware.
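The routed-swarm idea -- a lightweight router dispatching each prompt to a fine-tuned specialist -- can be sketched in a few lines. This is a hypothetical keyword-overlap router, not LocoLLM's actual implementation; the adapter names and keyword sets below are invented purely for illustration:

```python
# Illustrative sketch of a "smart router + specialist adapters" design.
# Adapter names, keywords, and the scoring rule are all hypothetical.

def route(prompt: str, adapters: dict[str, set[str]]) -> str:
    """Pick the specialist adapter whose keywords best overlap the prompt;
    fall back to a generalist when nothing matches."""
    words = set(prompt.lower().split())
    best, best_score = "generalist", 0
    for name, keywords in adapters.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = name, score
    return best

ADAPTERS = {
    "maths-lora": {"integral", "derivative", "equation", "solve"},
    "marketing-lora": {"campaign", "brand", "segment", "audience"},
}

print(route("Solve this equation for x", ADAPTERS))    # maths-lora
print(route("Draft a brand campaign brief", ADAPTERS)) # marketing-lora
```

In practice the router would then load the chosen LoRA adapter into the base model before generating, and a production router might score prompts with embeddings or a small classifier rather than keyword overlap.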
- **Active** -- Does a conversational nudge shift students from passive delegation to active conversation, and improve task outcomes? Tested with frontier models.
- **Planned** -- Can nudged students using a weak local model match un-nudged students on a frontier model? Reframing AI equity as a habits problem.
- **Active** -- A framework for understanding how cognitive strategies transfer across AI-assisted learning contexts. A 4-paper series.
- **Active** -- Design science research on AI-powered education simulations.
- **Planned** -- How context window size affects small language model performance on consumer hardware.
- **Planned** -- The relationship between perceived AI intelligence and token generation speed.

The entire fleet was assembled opportunistically -- the right capability at the right price, not a planned procurement. Total cost: well under $1,000 AUD.
| Machine | Role | Key Hardware |
|---|---|---|
| Colmena | Multi-GPU inference hive, LocoBench primary | WEIHO 8-GPU chassis, floor cards per VRAM tier |
| Burro | Overnight fine-tuning | IBM x3500 M4, Tesla P100 16 GB HBM2 |
| Cerebro | LocoEnsayo inference host | Ryzen 5 2600, RTX 2060 Super 8 GB |
| Peque | Reference floor node | Dell Optiplex 990, GTX 1650 OC 4 GB |
| Tortuga | Multi-GPU test bench | ASUS B250 Mining Expert, 19 PCIe slots, 1250 W PSU |
| Poco | Remote terminal, Apple Silicon testing | MacBook M1, 16 GB unified memory |
We are not mapping the floor of what AI can do because we cannot afford the ceiling. We are mapping the floor because most people live there, and nobody is documenting it honestly.
The institutional restriction that started LocoLab was not an obstacle. It was a clarification. It forced the question: what is AI actually for in an education context? The answer -- amplifying thinking, not replacing it -- turned out to be better served by local small models than by frontier alternatives.
Secondhand hardware, small models, and honest baselines produce research that the A100 crowd is not doing -- not because they could not, but because the question only becomes visible from the floor.
Loco by name. Serious by intent.
LocoLab is a School of Marketing and Management initiative at Curtin University, Perth, Western Australia. Whether you're a student, researcher, or just curious -- we'd love to hear from you.
Project Lead: Michael Borck