The increasing rate of change
Technological change on its own is difficult enough to keep up with. But what we're really dealing with is an increasing rate of change, which frames the challenge somewhat differently.
As the implications of "large language models" and "simulated reasoning models" become better understood, it will dawn that it is not enough to frame things in terms of change in itself; rather, it is the increasing rate of change that matters.
One implication of an increasing rate of change is that you must pay ever closer attention to what might be coming next. This helps mitigate risks such as your investment in a current technological paradigm turning out to have been badly timed.
For example, going all in on an investment, only for something new to come down the road tomorrow and make that investment obsolete.
Now clearly, it is next to impossible to foresee what might become the next "killer application", or what might turn out to be a dead duck, as the case may be.
So what can be done?
Well, recall that in the late sixties and early seventies, many small and medium-sized companies, following the lead of the largest companies, were moving to what was then called "Electronic Data Processing" (EDP), with the advent of minicomputers.
Understand, it was minicomputers that made "EDP" ubiquitous.
In a similar vein, more recently, "cloud" data centres and servers, able to deliver "Enterprise applications" at scale to small and medium-sized businesses, did much the same.
Now today we have LLM technology coming to dominate, and perhaps entirely remake, many organisations' functions, operational centres, and even whole industry sectors.
But looking at the major shifts above, was it not the speed of computing, the computational capacity, and the information transmission capacity, delivered through a new type of computing centre, that was the key enabler?
Similarly, powering these new trillion-parameter LLMs are legions of incredibly powerful new computer chips, installed in data centres dedicated to the purpose.
Therefore, I think that looking out for what might be coming next does not necessarily mean trying to spot the next so-called "killer application".
That should come as something of a relief, since hindsight is really the only judge of such things.
Rather, it may be more practical to keep an eye on computational capacity and information transmission capacity, and on the actual installation and deployment of the new data centres that house this new kind of processing power.