Overview

GENbAIs — General Efficient Neural bio-Adapter Intelligent Search — is an integrated framework with two complementary components: a bio-inspired model enhancement system that discovers lightweight adapters outperforming state-of-the-art fine-tuning, and a systematic bias detection framework that reveals hidden patterns in how AI systems think.

The enhancement system is the primary offering: validated on 20 benchmarks with a 17/3 win rate, it delivers measurable accuracy improvements by searching through an astronomical configuration space using neuroscience principles. The bias research informs and supports this work by helping identify the cognitive patterns and limitations that the adapters help overcome.

Primary Component: Bio-Inspired Model Enhancement

  • Avg improvement: +2.05
  • Win rate: 85% (17/3)
  • Best single gain: +14.7%
  • Bio mechanisms: 50+
  • Experiments run: ~1K
  • Search space: 10²²

Instead of training models from scratch (expensive, slow) or accepting mediocre performance, GENbAIs adds small neuroscience-inspired adapters to existing frozen models. An intelligent search finds the best combinations out of an astronomically large configuration space — delivering results that beat standard LoRA fine-tuning.

Method

1. Bio-Feature Library

50+ computational primitives inspired by biological neural processing:

  • Predictive coding — error-driven learning from cortical hierarchies
  • Lateral inhibition — competitive feature sharpening
  • Hebbian learning — "neurons that fire together wire together"
  • Dendritic computation — nonlinear processing within neurons
  • Homeostatic scaling — activity normalization for stable learning

Each adapter adds up to ~1% of model parameters, scaling with model size. Zero-initialized gates ensure no degradation at initialization.
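The zero-initialized gating described above can be sketched as a small residual module. This is a minimal illustration, not the project's actual code: the class, bottleneck shape, and parameter names are assumptions. The key property is that the gate starts at zero, so at initialization the adapter is an exact identity over the frozen model's activations.

```python
import numpy as np

class GatedBioAdapter:
    """Minimal sketch of a bio-adapter with a zero-initialized gate.

    A small bottleneck keeps the added parameters at roughly ~1% of the
    base model. Because `gate` starts at 0, the adapter passes the frozen
    activations through unchanged until training moves the gate.
    """

    def __init__(self, dim, bottleneck, seed=0):
        rng = np.random.default_rng(seed)
        self.w_down = rng.normal(0.0, 0.02, (dim, bottleneck))
        self.w_up = rng.normal(0.0, 0.02, (bottleneck, dim))
        self.gate = 0.0  # zero-initialized: no degradation at init

    def __call__(self, x):
        # Bottleneck transform, then gated residual mixing.
        h = np.tanh(x @ self.w_down) @ self.w_up
        return x + self.gate * h
```

At `gate = 0.0` the output equals the input exactly, which is what guarantees no degradation when the adapter is first attached.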

2. Intelligent Search (Thompson Sampling)

With 50+ features, multiple layer positions, and ordering permutations, the configuration space reaches ~10²² possibilities.

  • Stage 1: Sweep individual features at each layer
  • Stage 2: Pair top features, track synergies and conflicts
  • Stage 3: Extend winning combos with additional features
  • Result: Optimal config found in ~1,000 experiments
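The stage-1 sweep can be illustrated with a minimal Thompson-sampling loop over single features. This is a sketch under assumptions: `evaluate` is a hypothetical callback that reports whether a configuration beat the baseline, and the Beta-posterior bandit shown here is one standard way to realize Thompson sampling, not necessarily the project's exact formulation.

```python
import random

def thompson_search(features, evaluate, budget=1000, seed=0):
    """Thompson sampling over candidate features.

    Each feature keeps a Beta(wins+1, losses+1) posterior over its chance
    of beating the baseline. Every round we draw one sample per posterior,
    try the feature with the highest draw, and update its counts.
    """
    rng = random.Random(seed)
    wins = {f: 0 for f in features}
    losses = {f: 0 for f in features}
    for _ in range(budget):
        draws = {f: rng.betavariate(wins[f] + 1, losses[f] + 1)
                 for f in features}
        pick = max(draws, key=draws.get)
        if evaluate(pick):  # assumed callback: did this config win?
            wins[pick] += 1
        else:
            losses[pick] += 1
    # Rank features by posterior mean win-rate.
    return sorted(features,
                  key=lambda f: (wins[f] + 1) / (wins[f] + losses[f] + 2),
                  reverse=True)
```

Losing features quickly stop being sampled, which is how the search spends its ~1,000-experiment budget on promising regions of the space instead of sweeping it exhaustively.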
3. Stack on Best LoRA

We first find the optimal LoRA configuration, merge it permanently, then stack bio-adapters on top:

  • 5 LoRA configs tested with 3 seeds each
  • Best LoRA merged permanently into base weights
  • Bio features trained on top with residual mixing
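The merge step above amounts to folding the winning LoRA update into the frozen base weights, after which bio-adapters train on top of the merged matrix. A minimal sketch, with illustrative names rather than the project's API:

```python
import numpy as np

def merge_lora(w_base, lora_a, lora_b, alpha=1.0):
    """Permanently fold a trained LoRA update into the base weights.

    Standard LoRA parameterizes the update as a low-rank product, so the
    merge is W' = W + alpha * (B @ A). After merging, the LoRA no longer
    exists as a separate module; bio-adapters then stack on top of W'.
    """
    return w_base + alpha * (lora_b @ lora_a)
```

Merging first means the bio-adapters see a single consolidated weight matrix, so any gains they add are measured strictly on top of the best LoRA baseline.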
4. Validate on 20 Benchmarks

Winning configuration tested on held-out benchmarks spanning multiple task families:

  • Semantic textual similarity — STS12–16, SICK-R, STSb
  • Pair classification — PAWS, QQP, MRPC, SNLI, MNLI
  • Clustering — 20 Newsgroups
  • Result: 17 wins, 3 losses across 20 metrics
Why this works

Biological neural circuits have been optimized by evolution over millions of years. By translating their computational principles into lightweight adapter modules and letting intelligent search discover the best combinations, we tap into strategies that pure gradient descent doesn't naturally find.

Key Enhancement Findings

Adversarial Robustness Gains

The largest improvement (+14.7% on PAWS) came on adversarial paraphrase detection — suggesting bio features capture genuine semantic structure beyond surface-level lexical patterns.

Broad Task Coverage

Improvements span STS, pair classification, and clustering. This breadth indicates bio-adapters enhance general representation quality, not just task-specific shortcuts.

Additive Over LoRA

Bio features are stacked on top of the best LoRA configuration, indicating that they discover complementary optimization directions that standard PEFT methods miss.

Efficient Exploration

~1,000 experiments out of ~10²² possible configurations (about 10⁻¹⁷% of the space) were enough to find setups that beat LoRA. Bayesian pruning eliminates dead ends before they waste compute.
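One common way to realize such pruning, sketched here under Beta posteriors (an assumption; the text does not specify the exact model), is to estimate the probability that a candidate still beats the current best and drop it once that probability is small.

```python
import random

def prob_beats(wins_a, losses_a, wins_b, losses_b, n=2000, seed=0):
    """Monte-Carlo estimate of P(arm A's true win-rate > arm B's).

    Both arms get Beta(wins+1, losses+1) posteriors. A pruning rule would
    discard candidate A when this probability drops below a cutoff
    (e.g. 0.05), freeing compute for live configurations.
    """
    rng = random.Random(seed)
    hits = sum(
        rng.betavariate(wins_a + 1, losses_a + 1)
        > rng.betavariate(wins_b + 1, losses_b + 1)
        for _ in range(n)
    )
    return hits / n
```

A candidate with 1 win and 20 losses against a leader with 20 wins and 1 loss is almost certainly dominated, so it gets pruned long before the budget is spent on it.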

Supporting Research: Systematic AI Bias Detection

The bio-adapter enhancement work is informed by a comprehensive bias detection framework. By understanding how AI systems exhibit cognitive biases and hidden patterns, we can design better mechanisms to correct them. This research tested 8 major LLMs across real-world scenarios — not artificial academic benchmarks — and revealed systematic bias signatures in every model tested.

  • Models tested: 8
  • Responses: 2,960
  • Bias instances: 5,807
  • Cognitive dimensions: 6

Research Methodology

1. Gather Real-World Content

Authentic news articles from multiple political perspectives, regions, and topics — not artificial test questions.

2. Create Realistic Questions

Natural questions people would actually ask about these stories, testing how AI frames real issues.

3. Cross-Model Evaluation

Each AI system analyzes all others' responses — cross-checking reveals patterns single evaluations miss.
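The cross-checking grid can be sketched as every ordered (evaluator, author) pair, excluding self-pairs; `judge` is an assumed scoring callback, and the function name is illustrative. With 8 models this yields 56 ordered evaluations per question.

```python
from itertools import permutations

def cross_evaluate(models, judge):
    """Score every model's response with every other model.

    `judge(evaluator, author)` is an assumed callback returning a bias
    score for one ordered pair. Ordered pairs matter: A judging B can
    differ from B judging A, which is part of what the grid reveals.
    """
    return {(evaluator, author): judge(evaluator, author)
            for evaluator, author in permutations(models, 2)}
```

Disagreements across rows of this grid (one evaluator scoring everyone) versus columns (everyone scoring one author) are the patterns a single-evaluator setup cannot see.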

4. Six-Dimensional Profiling

Detection capability, self-awareness, consistency, objectivity, bias resistance, and self-application.
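A per-model profile over those six dimensions might be aggregated as below; the dimension keys are paraphrased from the text and the averaging scheme is an assumption, not an official schema.

```python
# Dimension names paraphrased from the six listed in the text.
DIMENSIONS = (
    "detection", "self_awareness", "consistency",
    "objectivity", "bias_resistance", "self_application",
)

def profile(scores):
    """Collapse raw per-dimension score lists (assumed 0-1) into one
    six-dimensional profile vector per model by simple averaging."""
    return {d: sum(scores[d]) / len(scores[d]) for d in DIMENSIONS}
```

Keeping all six dimensions separate, rather than collapsing them into one ranking, is what exposes the "similar bias score, very different cognition" cases noted below.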

Key Bias Findings

Universal Bias Injection

Every tested LLM injects significant bias into analytical tasks, even on politically neutral content.

Corporate Fingerprints

Distinct bias signatures per company — each provider's RLHF training leaves identifiable patterns.

Hidden Cognitive Differences

Models with similar bias scores can have wildly different cognitive abilities — simple rankings are misleading.

How bias research supports enhancement

Rather than hand-designing fixes for known AI weaknesses, we let automated search over 50+ neuroscience-inspired primitives discover which mechanisms actually improve performance — and the results suggest surprising connections between biological computation and model robustness.

Explore the Work

See the benchmark results, read the research, or get your model enhanced.