The launch last week of GitHub Spark by the Microsoft-owned company was a low-key affair, flying under the radar of most of the mainstream media. However, it may just turn out to mark the start of a revolution in how we consumers interact with technology, especially on our phones.
GitHub Spark is a new AI-powered platform that promises to give anyone the ability to create their own ‘micro app’ on demand. These apps, quaintly called ‘Sparks’ by the marketing peeps, will be produced almost instantly, tailored to our needs at the time, and available to use across different platforms.
No need to download an app from a store, or wait for someone to laboriously craft a new bit of software. Need a travel app for next month’s vacation? Type or speak your request to the AI, explaining what you need, and before you can say soggy lasagne it’ll be sitting on your phone.
Yes, it sounds ridiculous now, but Spark is proof that the big boys are not kidding about this upcoming revolution. This is not a scrappy backstreet startup; this is Microsoft, the biggest software company in the world.
What makes Spark different?
GitHub Spark allows you to share your 'sparks' with others and control whether they get read-only or read-write permissions, Microsoft explained. Recipients can then choose whether to use a spark as-is or remix it to suit their own preferences. This is similar to the way Anthropic manages Claude's Artifacts, but on a bigger scale.
Eight months ago I wrote a piece here on Tom’s Guide outlining some of the ways that AI would upend how we live and work. I touched on software production, and how services like Devin were already changing how software is made.
The implication was that instead of passively consuming our applications, we would become prosumers, creating what we needed when we needed it.
GitHub Spark is the start of that potential revolution. And it has arrived astonishingly early by my imagined timeline.
Eight months is a lifetime in AI, and even in that short space of time the advances we’re now seeing every single day are unbelievable. But before I drift off to fanboy-land, a caveat: it’s not all roses and binary perfume. There’s just a whiff of the runaway train about all of this at times.
What are the implications?
Can we really sustain this pace of progress, when hundreds of thousands of people are being laid off in the technology sector alone? What does this mean for the world of education, and what will our children face in the coming years? What on earth is going on here?
Right now it's all conjecture and imagination. But we need to understand that, despite the naysayers, this stuff is happening right here, right now.
For the past few months, I’ve been playing around with the growing number of tools that can deliver on the instant app promise. And I’ve been astounded by how fast they’re improving and how capable they’re becoming.
It’s basically all down to the improving quality of the underlying models. As long as companies like OpenAI, Anthropic, Google and Meta keep producing step-change improvements in the core technology, everything else will follow.
So far, in just a few months, I have succeeded in creating around a dozen micro apps for my own personal use. I am not a programmer of any kind. I am, at best, a cut-and-paste bodge artist with ambition.
Yet despite that clearly defined handicap, I have been able to make myself my own chatbot wrapper, a simple audio mastering tool based on an established open-source project, and a music jukebox that can deliver my music to me anywhere via Amazon’s S3 storage service (hint: it’s ugly as sin, but it works).
Most of these took a few hours to make at most. My latest venture is to create a small ride-sharing app for my yoga group to use to share cars to events. I spent four hours on it yesterday, and it’s tantalizingly close to the finish line. Close but no banana, because there’s a major bug the AI can’t seem to fix.
And this is the point. Right now we’re at the stage where AI app-making tools like Bolt.new, CodeCompanion, and Marbalism are massively reliant on their underlying models. This means they all suffer from the current problem of models running out of context ‘steam’ towards the end of a coding session, which introduces errors, loops and general frustration.
But please remember, these tools are doing every single bit of the coding and design needed to create the app. We users do nothing more than write an initial request, and then tweak the results in plain English.
What comes next?
Now imagine this fledgling AI technology growing over the next 10 to 12 months, aided by megacorp funding and support for tools like GitHub Spark. Why would I want to log on to a confusing and often expensive app store from Google or Apple to search for a solution to my problem?
Why rely on some distant company to produce, support, and enhance my lifestyle apps, when I can create something myself in minutes? Why get locked into a proprietary and exploitative ecosystem when I can roll my own solutions?
Remember, there was a time when we scoffed at the idea of curating our own personal choice of evening entertainment. What could possibly replace the sophistication of television and the movies? Well, we now know. Netflix and PlayStations.
Now if you don’t mind, I’ve got to get back to wrestling with this ride-share app. These tussles will soon be a thing of the past, but right now I’m going to have to be extra nice to Anthropic’s Claude 3.5 Sonnet model and coax some kind of out-of-the-box thinking to get around this bug. Now what was the special prompt? Oh yes, “please can you take a look at this code and fix it.” There, that should do the job.