Although SpecKit is a great idea, we've again slipped into the old way of thinking, where we expect programmers to do most of the grunt work and force them to learn yet another syntax (/speckit.constitution, /speckit.plan, etc.). Having to keep learning new syntax is mentally exhausting as one gets older, and it gets frustrating because by the time you learn one syntax, somebody else has created a new trending one.
When vibe-coding with an LLM, what I've really wanted is an interface where I could first specify what I'm trying to build in plain English and then tell the LLM to build components one by one (perhaps Michael wanted to convey something similar). The LLM should automatically create the code, build it, and show me the generated GUI or the working behaviour of the code. It should automatically ask me whether the look and feel matches what I wanted. It should provide a layer over which I can draw or drag components with the mouse pointer, to tell the LLM what changes I want in the GUI and how it should behave. Such an ability to refine the output is what I find lacking in existing approaches.
This would require an LLM that is capable of "seeing" the GUI it generates and of comprehending a sequence of events, so that the program's behaviour can be matched against user expectations. It would also require getting rid of the existing IDE-based method of programming.
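To make this concrete, here's a rough sketch of the loop I'm imagining. All of these functions are hypothetical placeholders, not part of SpecKit or any existing tool; they only stand in for whatever LLM, build, and screenshot services a real implementation would wire together:

```python
# Hypothetical sketch of the visual refinement loop described above.
# None of these helpers exist in SpecKit or any specific tool.
from dataclasses import dataclass, field


@dataclass
class Component:
    spec: str                                       # plain-English description from the user
    code: str = ""                                  # source generated by the LLM
    feedback: list[str] = field(default_factory=list)


def generate_code(component: Component) -> str:
    """Ask the LLM to (re)generate the component from its spec plus accumulated feedback."""
    raise NotImplementedError  # stub: call whichever LLM you use here


def build_and_screenshot(code: str) -> bytes:
    """Build/run the component and capture an image of the rendered GUI."""
    raise NotImplementedError  # stub: e.g. run a dev server and take a headless screenshot


def ask_user(screenshot: bytes) -> str | None:
    """Show the screenshot with a drawing/dragging overlay and return the user's
    annotations, or None when the look and feel matches expectations."""
    raise NotImplementedError  # stub: this is the visual feedback layer


def refine(component: Component) -> Component:
    """Generate, render, show, collect visual feedback, and repeat until accepted."""
    while True:
        component.code = generate_code(component)
        screenshot = build_and_screenshot(component.code)
        note = ask_user(screenshot)
        if note is None:                     # user is satisfied with the result
            return component
        component.feedback.append(note)      # annotations feed the next generation pass
```

The key step is ask_user: feedback comes back as visual annotations on the rendered GUI rather than as another round of prose or yet another command syntax.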
Yes, I understand this would require a heck of a lot more tokens, but we need to see whether this approach would actually consume fewer tokens overall than users struggling to specify things in English the way they do now. Time is more valuable than tokens: if a more visual approach gets us to the desired implementation faster, it's worth it. When we are at the threshold of leaping forward to a whole new way of doing things, we should be brainstorming possibilities rather than slipping back into the trap of memorizing new syntax and doing grunt work, when we should really be making the computer do the work for us.