The Collective is for people who can ship real work. Admission is based on quality, reviewed against a rubric, and finalized by committee.
Choose one primary track. Depth in one area scores better than shallow breadth.
- AI Research: Model architecture, feature engineering, calibration design, and benchmark methodology.
- Threat models, exploit taxonomy, adversarial test design, and contract risk heuristics.
- Data pipelines, model serving infra, scoring APIs, and performance reliability work.
- Exchange integrations, ecosystem onboarding, and institutional intelligence distribution.
- Structured annotation workflows for exploit history, behavioral classes, and outcome ground truth.
Applications are scored on a fixed 100-point rubric before committee review.
| Dimension | Weight | What We Verify |
|---|---|---|
| Technical Depth | 35% | Demonstrated capability, shipped work, reproducibility, and technical rigor. |
| Protocol Relevance | 25% | Direct impact on core risk intelligence modules and near-term roadmap value. |
| Execution Reliability | 20% | Consistency, delivery record, and ability to operate within milestone constraints. |
| Ethics and Integrity | 10% | Research ethics, disclosure behavior, and alignment with risk language standards. |
| Network Leverage | 10% | Ability to bring strategic partners, data access, or distribution acceleration. |
Minimum Pass Threshold: 72/100
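The rubric arithmetic above can be sketched as a weighted sum: each dimension is scored on a 0–100 scale, multiplied by its table weight, and the total is compared against the 72-point threshold. A minimal sketch follows; the weights and threshold come from the table, while the candidate's per-dimension scores are purely hypothetical.

```python
# Rubric weights as listed in the table above (sum to 1.0).
WEIGHTS = {
    "Technical Depth": 0.35,
    "Protocol Relevance": 0.25,
    "Execution Reliability": 0.20,
    "Ethics and Integrity": 0.10,
    "Network Leverage": 0.10,
}
PASS_THRESHOLD = 72  # minimum weighted total out of 100


def rubric_score(scores: dict) -> float:
    """Weighted total on a 100-point scale, given 0-100 dimension scores."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)


# Hypothetical candidate used only to illustrate the arithmetic.
candidate = {
    "Technical Depth": 80,
    "Protocol Relevance": 75,
    "Execution Reliability": 70,
    "Ethics and Integrity": 90,
    "Network Leverage": 50,
}

total = rubric_score(candidate)
print(f"{total:.1f} -> {'pass' if total >= PASS_THRESHOLD else 'fail'}")
```

Note how the weighting makes depth dominant: a strong Technical Depth score can carry a weak Network Leverage score, consistent with "depth in one area scores better than shallow breadth."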
Allocation is tied to verified contribution impact, not passive participation.

- Initial allocation bands map to rubric tiers and projected contribution value.
- Strict maximum per accepted contributor to prevent concentration and whale dominance.
- Progress-based unlock schedule linked to verified protocol delivery and governance compliance.
Structured intake for contributor qualification. All rubric scores are provisional until finalized by blind reviewer scoring.