Depression AI Goes Open Source, Bypasses Big Pharma Deal
Depression‑detecting AI team rejects $50,000‑a‑week offer, opts to open‑source
The team behind a fledgling depression‑detecting AI has spent months wrestling with the FDA’s approval process, a path that has proved anything but straightforward. Engineers and clinicians alike have been forced to balance a fragile cash flow against the mounting costs of clinical trials, data validation and regulatory paperwork. In that pressure cooker, outside investors have begun circling, pitching cash‑heavy deals that promise quick runway but demand deep equity stakes.
One such pitch, according to co‑founder Chang, floated a weekly payout of roughly $50,000 in exchange for $1 million worth of company shares. For a group whose mission is to keep the technology accessible to mental‑health providers, the offer raised immediate red flags. Instead of taking a short‑term lifeline that could compromise long‑term goals, the developers chose a different route—making most of their work openly available so the broader community can keep it moving forward.
---
Rather than accept "predatory" short-term offers to meet payroll (Chang said one proposal offered around $50,000 a week in exchange for $1 million in equity), the team decided to open-source most of its technology so others might continue the work. Open-sourcing a mental health screening model, however, raises concerns about misuse. Tools designed to flag signs of depression or anxiety could, in theory, be deployed outside clinical settings, for instance by employers or insurers, without the safeguards typically required in healthcare.
Once the technology is released publicly, there is little to prevent it from being used in ways its creators did not intend. Nicholas Cummins, a senior lecturer in speech analysis and responsible AI in health at King's College London, told The Verge that open-source releases often lack the detailed "paper trail" regulators expect, including a clear record of how a model was trained, validated, and tested for safety. Without that record, he said, bringing a product built on the technology through FDA approval could prove difficult.
After seven years of work, Kintsugi’s team chose to close the company rather than accept a $50,000‑a‑week cash infusion tied to a $1 million equity stake. The decision reflects a reluctance to trade long‑term credibility for short‑term survival. By releasing most of their depression‑detection algorithms as open‑source, the founders hope other researchers can pick up where they left off.
The code may also prove useful outside clinical settings, for example in detecting deep‑fake audio. FDA clearance, however, never materialized, and the reasons for the delay remain unclear. Without that approval, the path to commercial deployment is uncertain, and it is not known whether open‑source contributions will bridge the gap.
Critics might argue that open‑sourcing a mental‑health screening tool raises privacy and liability questions that the startup has not addressed. Nonetheless, the move puts the underlying technology into the public domain, where its future impact will depend on external validation and responsible stewardship.
Common Questions Answered
Why did the Kintsugi team reject a $50,000-a-week investment offer?
The team viewed the investment proposal as "predatory," involving a $1 million equity stake that they felt would compromise their long-term vision and credibility. Instead of accepting short-term financial relief, they chose to open-source their depression-detecting AI technology to enable continued research and development.
What are the potential risks of open-sourcing a mental health screening AI model?
Open-sourcing a depression detection tool raises significant concerns about potential misuse, particularly by entities like employers or insurers who might inappropriately deploy the technology outside clinical settings. The technology could potentially be used to screen or discriminate against individuals without proper medical context or ethical safeguards.
How long did the Kintsugi team work on their depression-detecting AI before deciding to close the company?
The Kintsugi team dedicated seven years to developing their depression-detection AI technology, navigating complex FDA approval processes and clinical trials. After this extensive period of work, they ultimately chose to release their algorithms as open-source rather than accept what they perceived as unfavorable investment terms.