The Collective is for people who can ship real work. Admission is based on quality, reviewed against a rubric, and finalized by committee.
Choose one primary track. Depth in one area scores better than shallow breadth.
- AI Research: model architecture, feature engineering, calibration design, and benchmark methodology.
- Threat models, exploit taxonomy, adversarial test design, and contract risk heuristics.
- Data pipelines, model serving infrastructure, scoring APIs, and performance reliability work.
- Exchange integrations, ecosystem onboarding, and institutional intelligence distribution.
- Structured annotation workflows for exploit history, behavioral classes, and outcome ground truth.
Applications are scored on a fixed 100-point rubric before committee review.
| Dimension | Weight | What We Verify |
|---|---|---|
| Technical Depth | 35% | Demonstrated capability, shipped work, reproducibility, and technical rigor. |
| Protocol Relevance | 25% | Direct impact on core risk intelligence modules and near-term roadmap value. |
| Execution Reliability | 20% | Consistency, delivery record, and ability to operate within milestone constraints. |
| Ethics and Integrity | 10% | Research ethics, disclosure behavior, and alignment with risk language standards. |
| Network Leverage | 10% | Ability to bring strategic partners, data access, or distribution acceleration. |
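The weighting in the table reduces to simple arithmetic: each dimension is scored out of 100, multiplied by its weight, and summed to a 100-point total. A minimal sketch, assuming per-dimension scores on a 0-100 scale; the `weighted_score` helper and the sample applicant scores are hypothetical, while the dimension names and weights come from the table above.

```python
# Rubric weights from the table (fractions of the 100-point total).
WEIGHTS = {
    "Technical Depth": 0.35,
    "Protocol Relevance": 0.25,
    "Execution Reliability": 0.20,
    "Ethics and Integrity": 0.10,
    "Network Leverage": 0.10,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-dimension scores (each 0-100) into a 100-point total."""
    assert set(scores) == set(WEIGHTS), "score every dimension exactly once"
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

# Hypothetical applicant: technically strong, weaker on network leverage.
example = {
    "Technical Depth": 90,
    "Protocol Relevance": 80,
    "Execution Reliability": 75,
    "Ethics and Integrity": 85,
    "Network Leverage": 50,
}
print(weighted_score(example))  # 31.5 + 20.0 + 15.0 + 8.5 + 5.0 = 80.0
```

Because the weights sum to 1.0, a uniform per-dimension score carries through unchanged, which makes committee-stage comparisons across applicants straightforward.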
1. Structured submission with evidence links, track selection, and expected contribution scope.
2. Independent reviewers score each application on the rubric dimensions before identity context is revealed.
3. The governance committee validates the scorecards and approves or rejects in a recorded vote.
4. Accepted contributors enter milestone-based workstreams with periodic performance checks.
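The four stages above form a strict forward pipeline: no stage can be skipped, and onboarding is terminal. A minimal sketch of that ordering; the stage names and the `advance` helper are illustrative assumptions, not part of any actual process tooling.

```python
from enum import Enum, auto

class Stage(Enum):
    SUBMITTED = auto()       # structured submission with evidence links
    BLIND_REVIEW = auto()    # rubric scoring before identity is revealed
    COMMITTEE_VOTE = auto()  # recorded approve/reject vote
    ONBOARDED = auto()       # milestone-based workstreams

# Legal forward transitions; anything else is rejected.
NEXT = {
    Stage.SUBMITTED: Stage.BLIND_REVIEW,
    Stage.BLIND_REVIEW: Stage.COMMITTEE_VOTE,
    Stage.COMMITTEE_VOTE: Stage.ONBOARDED,
}

def advance(stage: Stage) -> Stage:
    """Move an application to the next stage, refusing to skip steps."""
    if stage not in NEXT:
        raise ValueError(f"{stage.name} is terminal")
    return NEXT[stage]
```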
After submitting, applicants are redirected to the purchase page to complete their token purchase.