This README.md is a TEMPORARY DRAFT until the publication of the Dialogoo white paper. Dialogoo is an open project; suggestions, contributions, and collaborations are very welcome.
Dialogoo was born as a natural response to digital imbalance, under the critical assumption that as digital systems become more intelligent, digital imbalance amplifies disruptive social risks. However, Dialogoo is open to any ethical AI initiative and is ready to pivot if the times require it. We are aware that the digital layer grows thicker every day. A digital intermediate layer is expanding between human connections, and the models or agents managing those connections are becoming increasingly powerful. We refuse to remain passive while this unfolds. So we think critically, take a position, and above all, we act. That is what Dialogoo is about: we build with AI ethically, we don't remain in theory, we take ETHICAL ACTION!
Some forecasts predict transformative AI within years, not decades; in that case we could be unprepared for its effects. We can't wait for perfect research or policy. The risk isn't distant superintelligence; it has already started with the slow erosion of human agency happening right now. First and immediate line of action: fast action where physical and local networks are the safety net, so we build AI systems that contribute to urgently strengthening a strong physical layer before AGI and digital imbalance create a critical combination. Second line of action: technical research on safety layers. Third line of action, research and communication: we use data, real examples, and red-teaming examples to gain critical mass and put pressure on decision makers to act accordingly.
We turn the algorithmic loop into a bridge toward real life by using AI not as an addictive form of interaction but as a way to bring people back to people, pulling users from infinite scrolling and dopamine drain into face-to-face socialization and public life. We return data ownership to people and ensure its security to avoid deep manipulation.
Build: Applied projects. Think: Applied research. Communicate: Analytics and storytelling focused on policy change.
Because it is too risky to rely on a digital network whose control could be hijacked. Because AI Safety begins where humans meet.
Use AI to reconnect, not to trap. Understand physical networks and communities as a resilience layer for a healthy system. Keep humans, cultures, and communities at the center.
We will never build artificial companions, optimize for engagement or addiction, replace real relationships with simulation, or sell people's lives and intimacies in any data format. We build tools for balance, not distraction.
Dialogoo is an open umbrella for those who act. Builders. Researchers. Artists. Educators... Anyone who believes AI Safety needs to move quickly.
A white paper is in progress to ground these ideas in data, research, and methodology. Until then: prototype, connect, and share. The work is urgent. The invitation is open.
laiive is the first project under the DIALOGOO umbrella, and some of the services behind laiive are being built project-agnostic, so they can be used by other projects to extend DIALOGOO values.
UDO Recommender System. UDO stands for User Data Ownership: no black box manipulates users. Users are aware of what they share or privately use to feed the recommender system, and they can delete or change it at any time. A minimal sketch of this idea follows.
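The sketch below illustrates the UDO principle under stated assumptions; the names (`UserDataStore`, `ConsentedRecord`, `recommender_view`) are hypothetical and not the actual laiive API. The point is that every signal feeding the recommender is an explicit record the user can list, edit, or delete at any time.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentedRecord:
    """One piece of user-owned data feeding the recommender (illustrative)."""
    key: str                 # e.g. "interest:hiking"
    value: str
    shared_publicly: bool    # False = used privately by the recommender only
    added_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class UserDataStore:
    """All recommender inputs for one user, fully visible and editable."""

    def __init__(self) -> None:
        self._records: dict[str, ConsentedRecord] = {}

    def share(self, key: str, value: str, shared_publicly: bool = False) -> None:
        self._records[key] = ConsentedRecord(key, value, shared_publicly)

    def list_records(self) -> list[ConsentedRecord]:
        # The user can always inspect exactly what the recommender sees.
        return list(self._records.values())

    def update(self, key: str, value: str) -> None:
        self._records[key].value = value

    def delete(self, key: str) -> None:
        self._records.pop(key, None)

    def recommender_view(self) -> dict[str, str]:
        # The recommender only ever receives this explicit, user-curated view.
        return {r.key: r.value for r in self._records.values()}

# Example: a user adds data, then withdraws part of it; the recommender's view follows.
store = UserDataStore()
store.share("interest:hiking", "weekend group hikes")
store.share("location:district", "city centre", shared_publicly=True)
store.delete("interest:hiking")
print(store.recommender_view())  # {'location:district': 'city centre'}
```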
ThubsDown is an automatic thumbs-up/thumbs-down classifier for RLHF, built under two assumptions: 1) users concentrated on the chat don't care much about returning explicit feedback; 2) users who do give feedback are, per se, a biased sample. A sketch of the idea is shown below.
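The following is a minimal sketch of the ThubsDown idea, not the actual implementation; the behavioural features and training data are hypothetical. It infers an implicit thumbs-up/thumbs-down label from signals in the turn that follows an assistant reply, so RLHF feedback is not limited to the biased minority of users who click the buttons.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [user_rephrased_question, user_said_thanks, user_copied_answer,
#            session_continued, seconds_until_next_message]
# (illustrative features; real signals would come from conversation logs)
X_train = np.array([
    [1, 0, 0, 0,  5.0],   # rephrased immediately -> likely dissatisfied
    [0, 1, 1, 1, 40.0],   # thanked and copied the answer -> likely satisfied
    [1, 0, 0, 1,  8.0],
    [0, 0, 1, 1, 60.0],
    [1, 0, 0, 0,  3.0],
    [0, 1, 0, 1, 30.0],
])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 0 = thumbs-down, 1 = thumbs-up

# Simple baseline classifier over the implicit-feedback features.
clf = LogisticRegression().fit(X_train, y_train)

# Score a new conversation turn: probability of an implicit thumbs-up.
new_turn = np.array([[0, 0, 1, 1, 45.0]])
print(clf.predict_proba(new_turn)[0, 1])
```

The predicted labels could then stand in for explicit ratings when building RLHF preference data, while explicit clicks, when present, remain the ground truth.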