Introducing Deep Softworks
Announcements
Are we interfacing with AI the right way? As next-token predictors, Large Language Models naturally find their most intuitive home in a linear chat between human and machine.
Upon the public release of ChatGPT three years ago, the chat interface quickly cemented itself as the default way we “talk” to AI.
It remains the default choice for developers to this day because it is a familiar, accessible paradigm.
Many wildly successful "chat bot" products have been built since, and billions of dollars have followed. In that sense, the chat interface worked.
However, history has taught us that novel technology is seldom afforded its final form at the outset.
Early interfaces tend to mirror what we already know rather than what the technology ultimately enables.
In the early days of the iPhone, desktop metaphors were simply ported over to iOS before morphing into forms fit for the mobile device. I argue the same is true for AI.
This is because the current chat paradigm asks the user to take an additional step: writing a prompt. That act diverts cognitive resources away from writing, deciding, organizing, referencing, thinking. It is the antithesis of the utilitarian promise of AI.
This idea of cognitive siphoning gave rise to the goal of Deep Softworks: build software as infrastructure for thought.
Connect human and artificial intelligence seamlessly and without friction.
What is the final, invisible, ubiquitous form of AI in software? How does the layperson interface with it?
When these questions are answered well, the interface becomes lighter. AI fades into the background. Thought takes center stage.
See how this philosophy is applied in Rawa, our AI autocomplete for writing, and continue with Why Invisible AI is Up Next.