An integrated framework for AI enhancement and assessment — using neuroscience-inspired adapters to improve models, backed by systematic bias research.
GENbAIs — General Efficient Neural bio-Adapter Intelligent Search — is an integrated framework with two complementary components: a bio-inspired model enhancement system that discovers lightweight adapters outperforming state-of-the-art fine-tuning, and a systematic bias detection framework that reveals hidden patterns in how AI systems think.
The enhancement system is the primary offering: validated on 20 benchmarks with 17 wins against 3 losses, it delivers measurable accuracy improvements by searching an astronomical configuration space using neuroscience principles. The bias research informs and supports this work by helping identify the cognitive patterns and limitations that the adapters help overcome.
Instead of training models from scratch (expensive, slow) or accepting mediocre performance, GENbAIs adds small neuroscience-inspired adapters to existing frozen models. An intelligent search finds the best combinations out of an astronomically large configuration space — delivering results that beat standard LoRA fine-tuning.
50+ computational primitives inspired by biological neural processing:
Each adapter adds up to ~1% of model parameters, scaling with model size. Zero-initialized gates ensure no degradation at initialization.
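The zero-gate mechanism can be sketched in a few lines. This is a minimal NumPy version, not the project's actual module: the class name, sizes, and activation are illustrative. The key property is that the scalar gate starts at zero, so the wrapped layer is exactly the identity at initialization and cannot degrade the frozen model.

```python
import numpy as np

rng = np.random.default_rng(0)

class GatedBioAdapter:
    """Residual adapter: y = x + tanh(gate) * f(x).

    The gate is zero-initialized, so the adapter is an exact
    identity at init and opens only as training moves the gate.
    """
    def __init__(self, dim, hidden):
        # hidden << dim keeps the added parameter count small
        self.w_in = rng.normal(0, 0.02, (dim, hidden))
        self.w_out = rng.normal(0, 0.02, (hidden, dim))
        self.gate = 0.0  # zero-initialized: adapter starts as a no-op

    def __call__(self, x):
        f_x = np.maximum(x @ self.w_in, 0.0) @ self.w_out  # small ReLU MLP
        return x + np.tanh(self.gate) * f_x

adapter = GatedBioAdapter(dim=64, hidden=16)
x = rng.normal(size=(4, 64))
assert np.allclose(adapter(x), x)  # identity at init: zero degradation
```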
With 50+ features, multiple layer positions, and ordering permutations, the configuration space reaches ~10²² possibilities.
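The order of magnitude is easy to reproduce under stated assumptions. The numbers below (4 insertion positions, stacks of up to 3 ordered primitives drawn from 50) are illustrative, not the project's actual search grid; even this toy accounting reaches ~10²⁰, and slightly deeper stacks or an extra position push past 10²².

```python
from math import perm

N_FEATURES = 50   # bio-inspired primitives (the doc says "50+")
MAX_STACK = 3     # at most 3 primitives stacked per position (assumption)
POSITIONS = 4     # candidate layer positions (assumption)

# Ordered subsets of up to MAX_STACK features at one position:
# ordering matters, so count permutations P(n, k) for each depth k.
per_position = sum(perm(N_FEATURES, k) for k in range(MAX_STACK + 1))

# Positions are configured independently, so the space multiplies.
total = per_position ** POSITIONS
print(f"{total:.2e}")  # prints 2.08e+20 under these assumptions
```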
We first find the optimal LoRA configuration, merge it permanently, then stack bio-adapters on top:
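A minimal sketch of the merge-then-stack pipeline, written in plain NumPy rather than a real PEFT stack (all sizes and names are illustrative): the LoRA update is folded into the frozen weight, so inference pays for a single dense matmul, and a zero-gated adapter is then layered on top.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, alpha = 64, 8, 16          # hidden size, LoRA rank, LoRA alpha (illustrative)

W = rng.normal(0, 0.02, (d, d))  # frozen base weight
A = rng.normal(0, 0.02, (r, d))  # trained LoRA factors
B = rng.normal(0, 0.02, (d, r))

# Step 1: merge the best LoRA permanently into the base weight.
W_merged = W + (alpha / r) * (B @ A)

# Step 2: stack a zero-gated bio-adapter on top of the merged layer.
gate = 0.0                        # opens during adapter training
W_bio = rng.normal(0, 0.02, (d, d))

def layer(x):
    h = x @ W_merged.T            # LoRA is now free: one dense matmul
    return h + np.tanh(gate) * (h @ W_bio.T)

x = rng.normal(size=(2, d))
# The merged layer matches the unmerged LoRA forward pass exactly:
lora_forward = x @ W.T + (alpha / r) * (x @ A.T @ B.T)
assert np.allclose(layer(x), lora_forward)
```

With the gate at zero the stacked layer reproduces the merged-LoRA output exactly, which is why stacking cannot hurt the starting point.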
Winning configuration tested on held-out benchmarks spanning multiple task families:
Biological neural circuits have been optimized by evolution over millions of years. By translating their computational principles into lightweight adapter modules and letting intelligent search discover the best combinations, we tap into strategies that pure gradient descent doesn't naturally find.
The largest improvement (+14.7% on PAWS) came on adversarial paraphrase detection — suggesting bio features capture genuine semantic structure beyond surface-level lexical patterns.
Improvements span STS, pair classification, and clustering. This breadth indicates bio-adapters enhance general representation quality, not just task-specific shortcuts.
Bio features are stacked on top of the best LoRA configuration, showing that they discover complementary optimization directions standard PEFT methods miss.
~1,000 experiments, roughly one in 10¹⁹ of the ~10²² possible configurations, were enough to find setups beating LoRA. Bayesian pruning eliminates dead ends before wasting compute.
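As a stand-in for the Bayesian pruning described, a simpler median-pruning loop illustrates the idea: score every candidate with a cheap probe, abandon everything below the running median, and spend the expensive full evaluation only on survivors. All functions and scores here are hypothetical.

```python
import random

random.seed(0)

def cheap_probe(cfg):
    """Fast, noisy proxy score from a short evaluation (hypothetical)."""
    return cfg["quality"] + random.gauss(0, 0.05)

def full_eval(cfg):
    """Expensive full-benchmark score (hypothetical)."""
    return cfg["quality"] + random.gauss(0, 0.01)

# Sample candidate configurations from the huge space.
trials = [{"quality": random.random()} for _ in range(200)]

# Prune: any trial whose cheap probe falls below the running median
# is abandoned before the expensive evaluation is spent on it.
probes = [(cheap_probe(cfg), cfg) for cfg in trials]
probes.sort(key=lambda t: t[0])
median = probes[len(probes) // 2][0]
survivors = [cfg for score, cfg in probes if score >= median]

results = [(full_eval(cfg), cfg) for cfg in survivors]  # ~half the compute
best_score, best_cfg = max(results, key=lambda t: t[0])
print(f"fully evaluated {len(survivors)}/{len(trials)}; best {best_score:.2f}")
```

Real Bayesian pruning additionally models the probability a trial will ever beat the incumbent, but the compute-saving structure is the same.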
The bio-adapter enhancement work is informed by a comprehensive bias detection framework. By understanding how AI systems exhibit cognitive biases and hidden patterns, we can design better mechanisms to correct them. This research tested 8 major LLMs across real-world scenarios — not artificial academic benchmarks — and revealed systematic bias signatures in every model tested.
Authentic news articles from multiple political perspectives, regions, and topics — not artificial test questions.
Natural questions people would actually ask about these stories, testing how AI frames real issues.
Each AI system analyzes all others' responses — cross-checking reveals patterns single evaluations miss.
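The cross-checking protocol amounts to an N×(N−1) evaluation matrix in which every model judges every other model's response but never its own. A minimal sketch, with placeholder model names and a placeholder scoring function:

```python
from itertools import permutations

models = ["model_a", "model_b", "model_c"]   # hypothetical LLM names

def evaluate(judge, author, response):
    """Stand-in for asking `judge` to score `author`'s response (hypothetical)."""
    return len(response) % 5  # placeholder score in 0..4

responses = {m: f"analysis from {m}" for m in models}

# Every model judges every OTHER model's response: diagonal excluded.
matrix = {(judge, author): evaluate(judge, author, responses[author])
          for judge, author in permutations(models, 2)}

assert len(matrix) == len(models) * (len(models) - 1)
```

Disagreements along a column (many judges, one author) flag responses that single-model evaluation would miss.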
Detection capability, self-awareness, consistency, objectivity, bias resistance, and self-application.
Every tested LLM injects significant bias into analytical tasks, even on politically neutral content.
Distinct bias signatures per company — each provider's RLHF training leaves identifiable patterns.
Models with similar bias scores can have wildly different cognitive abilities — simple rankings are misleading.
Rather than hand-designing fixes for known AI weaknesses, we let automated search over 50+ neuroscience-inspired primitives discover which mechanisms actually improve performance — and the results suggest surprising connections between biological computation and model robustness.
See the benchmark results, read the research, or get your model enhanced.