Just to clarify, a large portion of the proposal was AI-generated, but not in a simplistic way. I didn’t just throw a prompt at an AI and let it do all the work. Before even starting with AI, I first made an entire structure for the document. Then, section by section, I collaborated with the AI to develop the proposal, refining it as we went. A big part of the process involved going back and forth—reading what the AI generated, noting what wasn’t right, figuring out what needed to change, and deciding which ideas to emphasize or better incorporate. The AI was a tool to help me organize and phrase things better, but the core ideas, research, and development are mine.
In that same way, I think DAO members could benefit from using AI to craft replies in a similar way. Writing down your ideas, using AI to help you organize and phrase them better—it saves time and makes for clearer communication. But if someone’s using AI just to spit out shallow replies, that’s not beneficial. AI should assist with writing, not replace the thought process behind it.
Adding Jupiter Horizon to the mobile app isn’t something I had considered yet, but it sounds like a fantastic idea! I’d love to explore that possibility, as I think integrating it into the app would help a lot of people navigate the ecosystem more easily and stay informed.
On the Apple Intelligence point, I get the concern, but even at its best, Apple’s AI will likely only reach ChatGPT’s level, which still isn’t on the level of what we’re building here. What we’re doing is tailored specifically to the needs of the DAO and the Jupiverse. Sure, in the far future, AI models may become so advanced that technologies like ours could be challenged, but I trust in my own and the team’s ability to adapt and pivot when that time comes. It’s all about staying ahead and keeping our solutions relevant as the landscape evolves.
Thanks for taking the time to give this feedback, Matt!
I totally agree with you that there’s an ethical argument to be had when it comes to AI usage on the platform, and I appreciate the thought you’ve put into outlining those concerns. The points you raise about AI misuse, particularly when it comes to fake engagement and hastily patched proposals, are spot on. That’s a major issue on the platform, and there needs to be more awareness around it to ensure we’re maintaining authenticity and integrity within the DAO.
While not all AI-generated work is inherently bad, the context and intention behind its use are key. When AI is used to support and streamline genuine contributions, it can be a powerful tool. However, when it’s used to manipulate or mislead people, it becomes a serious problem that needs to be addressed.
I don’t think you were targeting me or my proposal specifically, but since I’ve used AI to help craft my proposal and outline my thoughts, I hope that I haven’t misled anyone in the process. My intention was to use AI to present my ideas more clearly and effectively, and I’ve made sure that all the concepts and research in the proposal come from me.
I think this is an important topic that deserves more attention across the platform. Maybe you could consider making a separate post specifically focused on AI misuse and the ethical implications within the DAO. It could be a great way to get the wider community involved in thinking of solutions or at least raising awareness around this. I’d definitely be interested in joining that discussion and helping to brainstorm ideas on how we can combat these issues or just be more vigilant moving forward.
I don’t think anyone has any particular issue with using AI to increase productivity as you’ve done. What you’ve done is pretty cool and your team are clearly passionate about this space and walking the walk.
The main issue for us as a DAO is not knowing whether or not we’re communicating with other human beings. If I get to the end of the proposal and only afterwards find out that you didn’t actually write it personally… it’s not a great feeling tbh. I’d much rather know about it at the start.
That being said, I’m totally onboard with having an AIWG within the DAO. If all you guys did was explore AI use cases for Jupiter based on new developments in the space (e.g. Replit Agent) and share them with the DAO, you’d make a name for yourselves real fast.
If I understand correctly, you’re proposing that AI should be used in the process of creating a topic? Because if AI is used for posts or message replies, it may be abused, and we could end up reading long-winded stories or replies that are sometimes boring.
I think you have brilliantly captured the pros & cons of using AI as a tool for good or bad in the context of this debate. In previous debates I have argued that AI is a tool, just like any other. Anyone can use it to enhance their work, not replace their own thinking or creativity. It can help anyone be more efficient and informed, but the final decisions and judgments always rest with the person using it. Used in that context, it’s a force for good.
Like you said, it was absolutely not targeting you; it was rather coincidental that this ramble happened here. Jupresearch is focused on only presenting topics that are, well… on topic. So it’s not like we’d just open a topic on the general, broad theme of AI ethics.
I may have touched on that part too briefly, as my main concern was the dark side of it, but I can totally understand the positive side and often use AI myself to give things a basic structure. I wouldn’t use AI to elevate my rambles, but I totally would use it to draft a leading topic, because it often gives a more concise canvas and organized structure than my brainrot keyboard-spamming flow-of-thoughts would.
I mean, especially when a topic focuses on using AI to improve and organize, what could be better than using it to demonstrate that very advantage?
You are all good here bro, I really just took a jab at others, not present ones in here, to satisfy my own dark aura, hahaha.
I completely get that finding out something was AI-assisted after the fact might not feel great—definitely not my intention! I’ll add a note upfront making it clear that the proposal was written with AI assistance (it’s added to the To-Do list) to be transparent moving forward.
Thanks for being onboard with the AIWG, it means a lot! I haven’t heard about Replit Agent, but I’ll definitely check it out. Exploring different AI use cases for Jupiter is something I’d love to dive into at some point, though it’s not our current priority for the AIWG.
Thanks again for your thoughts, really appreciate it!
Hey, I see what you’re getting at. To clarify, I’m not suggesting AI should be used to create every post or topic. In my opinion, AI should be used as a tool to help simplify complex information or make content clearer, not to generate long or boring replies.
It’s about enhancing quality, not quantity. Hope that clears it up!
I get why you wouldn’t start a separate topic just for this—it totally makes sense. AI really does come in handy for organizing scattered thoughts into something coherent! And yeah, a little ‘dark aura’ now and then keeps things interesting.
Awesome, this is absolutely a great proposal that will help DAO members make informed decisions based on trustworthy answers from the AI. My question is this: the proposal mentions potential hallucinations by the AI, where it might provide inaccurate information. How will the AIWG address this issue and ensure users can trust the information provided by the AI tools? @LiamVDB
Thank you so much for your feedback and the thoughtful question!
You’re absolutely right—AI hallucinations are a known issue, and it’s something we take seriously. While the Technical Explanation section of the proposal goes into more depth on this, I realize it’s not as clear since I moved that section. I’ll make sure to add a reference to this section in the AI Hallucinations Note so it’s easier to find and understand.
To give you a brief overview: the AI will almost entirely rely on retrieved documents from trusted sources. If the AI can’t find relevant information, it will either refrain from answering or provide a disclaimer warning users that the response may contain hallucinations due to reliance on its base knowledge, which might be outdated or incomplete.
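To make the retrieval-first flow above concrete, here’s a minimal sketch of the refrain-or-disclaim fallback. All names here (`retrieve_documents`, `RELEVANCE_THRESHOLD`, the toy corpus) are purely illustrative assumptions, not the AIWG’s actual implementation:

```python
# Hypothetical sketch: answer only from retrieved trusted sources,
# and attach a disclaimer when no relevant source is found.

RELEVANCE_THRESHOLD = 0.75  # assumed cutoff for "relevant enough"

DISCLAIMER = (
    "Warning: no trusted source found for this question. "
    "This answer would rely on the model's base knowledge, "
    "which may be outdated or incomplete."
)


def retrieve_documents(question: str) -> list[tuple[str, float]]:
    """Stand-in for a vector-store lookup returning (snippet, relevance)."""
    corpus = {
        "what is the aiwg": (
            "The AIWG is a proposed AI Working Group for the DAO.",
            0.92,
        ),
    }
    hit = corpus.get(question.lower().rstrip("?"))
    return [hit] if hit else []


def answer(question: str) -> str:
    # Keep only documents that clear the relevance bar.
    docs = [d for d in retrieve_documents(question) if d[1] >= RELEVANCE_THRESHOLD]
    if docs:
        # Ground the reply strictly in retrieved, trusted snippets.
        return "Based on trusted sources: " + " ".join(text for text, _ in docs)
    # No grounding found: refrain-or-disclaim fallback.
    return DISCLAIMER


print(answer("What is the AIWG?"))
print(answer("Something off-topic"))
```

The key design choice is that the grounded path and the fallback path are explicitly separated, so the disclaimer can never be accidentally attached to a sourced answer.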
This is just the starting point for mitigating hallucinations. If we advance to the Trial Working Group phase, one of our key focuses will be on researching and implementing new techniques and frameworks, including ones to further minimize hallucinations and improve the AI’s accuracy.
Thanks again for your support and for raising this important point!
Thank you for testing! The clickable chatbot idea is definitely something we want to do, but we’re also planning to create a full chat interface like ChatGPT to drive deeper engagement. We haven’t emphasized multi-lingual capabilities in the proposal yet, but we will make sure to highlight this more.