AI Strategy: throwing pasta against the wall?


What exactly is going on with the hype around ChatGPT? Are we so impressionable that we suspend judgment on a whim and follow the herd? What part of a pre-trained model is magic? Didn’t we always know about garbage in, garbage out? Couldn’t we thus infer that the more data you feed into training, the more realistic-looking output the models would produce? So why the sudden craze about ChatGPT?

What is even corporate strategy?

One exercise I enjoy goes like this. I build a mental model of a given company, usually an influential player in the tech industry. I then come up with a number of hypotheses about the strategy it might be pursuing, something of a fantasy exercise. Next, I follow the company’s moves and public strategy statements over time, not too closely, as that would be too much work. Whenever the company pulls a move that surprises me, I go back to my hypotheses to try to understand what might be happening. I find it entertaining, and sometimes I learn from it. I’ve even come to the hypothesis that tech corporate strategy could be synthesised into abstract models not far off from reality. To be fair, this is just a game; I am sure that companies are doing serious work, and I have much respect for other people’s work.

Consider recent industry trends and how companies are responding to them. You could be forgiven for imagining that some are improvising rather than following a clear strategy. Let’s pick a few hot buzzwords: Crypto, Web3, Metaverse, AI. Over the past few years, many big players have come out with vision statements, substantial investment announcements, and product and service launches in these areas. Sometimes you read about expensive acquisitions in the very domains those companies previously claimed to be deeply focused on. How does that happen? Were earlier efforts not good enough, talent and resources not up to the task, organic-growth land grabs at hand, or just attempts to nip emerging competition in the bud? Where does this leave the erstwhile strategy? It’s hard to tell from the outside.

Blue Ocean. Red Ocean.

Once, particularly in the earlier days of the Internet boom, strategy consultants talked a lot about blue ocean vs. red ocean. The theory roughly suggests, and I’m oversimplifying just for this post, that a Blue Ocean, a relatively unexplored market, is ripe for growth and profits, whereas a Red Ocean is overcrowded and harder to survive in. Blue Ocean strategy suggested being an early mover in a relatively untapped area; companies could even create such spaces for themselves to thrive in. Arguably, a Blue Ocean strategy is much harder to foster in the contemporary tech industry, considering the way all the players seem to fight it out in every domain possible and how low the barrier to entry has become. Sometimes you wonder whether even a decoy product or service, given enough hype, wouldn’t trigger an arms race. Is this due to the fear of missing out, FOMO? Is it that, as Marc Andreessen once wrote, software is eating the world, so anything that touches software is an existential threat or potential bonanza for every sizeable tech player? Can a large company really thrive in every tech domain that emerges, no matter what its strengths and assets are? Whenever the leader of a large concern comes out touting a given tech, what does that say to the crews whose entire careers are vested in other, possibly competing or overlapping, areas? Couldn’t that cause distraction and incidental loss of focus in some parts of the organisation? Is it no longer possible to carve out and nurture a niche strategy?

Have we learned anything? Do we actually learn?

Haven’t we seen this all before? Does every company need its own ChatGPT play? Have we already forgotten everything we’ve learned about caution with AI, ethical AI, the challenges posed by deepfakes, how malice and misinformation are thriving, and where that could lead us if left unfettered? Do we really forget our lessons so quickly? If we are serious about the challenges posed by cybersecurity, deepfakes, misinformation and the like, then we should probably be more careful about weaving barely proven tech into every fabric of tech consumption. The very large corporations have the resources to put up adequate guardrails around their products and services; time will tell whether they are also deploying those resources accordingly. To be honest, ChatGPT appears to be rushed into some products much too fast for comfort; we shall see how that goes. This is where having a clear and adaptable strategy could pay off. The smaller and innumerable tiny players could represent danger areas, since they have fewer resources and might not be prepared to take appropriate precautions with new tech.

There are already reports of schools banning the use of ChatGPT or even rescinding remote assignments because of it. When tech makes it easy to fake things while blurring the boundaries between fake and real, whole legions of people and professions are bound to face even more challenges. It feels like we are resetting the learning, tabula rasa, without knowing what comes next. Imagine the CISO, or the team of CISSPs, at the very top of the security game, having prepared their company through much toil and tears; everyone has done a tremendous job. What about the AI part, now that ChatGPT is upon us? Well, I am afraid it’s time to do it all over again. There’s probably not even a name yet for the kind of education, curriculum, or threat models now needed to deal with the ChatGPT onslaught.

How does a company devise a winning AI Strategy?

I might have come up with a provocative question here. Of course I have some hypotheses, who doesn’t? To paraphrase the U.S. singer Kelis: I could teach you, but I’d have to charge. Imagine that you know nothing about a given knowledge domain, neuroscience for example (I don’t know a thing about it), and you are listening to experts debating concepts in specialist terminology; how could you possibly make sense of anything being said? Aren’t we going to create countless such situations if we pepper lots of tech with ChatGPT? If it is already hard to trust what you read or hear out there, it’s going to get a lot harder now. Have folks considered ways to undo ChatGPT things, and how is that going to play out?

Sit. Crawl. Walk. Run.

When strategy can be turned on its head seemingly overnight, strategy execution at a large concern can be thrown into turmoil. Large ships can’t be turned around as nimbly as small boats, obviously. Devising a winning AI Strategy is necessarily going to be iterative and incremental; it’s about learning! I’m not going to put up a recipe here; instead, I am going to argue for a discovery journey with a healthy amount of failure to be expected. Empirically, some babies go straight from sitting to walking, skipping the crawling phase; can the same be said of AI tech? If we simply deploy models that appear somewhat promising, isn’t that like attempting something akin to herd immunity, hoping for the best? Is it a good thing to experiment at such a large, uncontrolled scale? Or are we perhaps overblowing the opportunity cost of delaying deployment just a bit? I’m going to exclude Google from that last question; they managed a nice escape with Kubernetes and are probably hoping to repeat a similar feat with something to fight ChatGPT, though that is an entirely different game.

I’m very much a tech advocate; the potential for augmenting human knowledge and ability is limited only by our imagination. It is fun to play and experiment with new AI technology. In this context, it is probably fine for companies to conduct controlled experiments and keep iterating until solutions emerge with well-understood impact and boundaries. It’s fascinating that we humans can build something and then promptly start to imagine it as somehow magical. That’s like making up a story and then believing it is real; you just made it up, didn’t you? Strategy, too, can emerge from experimenting; it may simply be lazy to chalk that up to market dynamics and call it a day. We have to work harder than that.

A nice quote that is still relevant today.

Computers are useless. They can only give you answers.

Pablo Picasso (allegedly, I’ve not seen any original/official document).
