Why ChatGPT won't ever be a #gpt

One of the challenges of being a #polymath is that few hiring organisations are structured — or even have the tools — to appreciate, value and draw out the very quality they assume is clearly not there: focus.

https://gb2earth.com/mil

The focus of a #polymath is NOT all over the place: or not in the way you assume when you hear this phrase trotted out. The focus of a person who can be seen in this way is actually exceedingly sharp: as sharp as any hot knife through butter ever proverbially was.

And yes, it's a focus which IS all over the place: but in a wholly positive and utterly astounding way.

It's the difference between the #onebestway of those ever-so-simplifying traditional #startup ecosystems — you can't solve a problem unless you can simplify it to an elevator pitch — and those multiple bites at the proverbial apple which I propose in the #complexifyme series of projects and workstreams.

In the case of a #polymath, then, we ONLY solve problems — only WANT to solve them — when we have a full understanding not of one part simplified to the max but of all parts connected to the max.

Now: #gpts — NOT #ai even mainly, but what are more rightfully known as #generalpurposetechnologies ... things like email & voicemail, the wheel from ancient times, the printing press from #gutenberg onwards, and multiple modern operating systems — need the skills of a #polymath in both their initial creation and their ongoing maintenance & management.

What they get is something quite different: quite inappropriate.

Firstly, what they get is a mentality of focus to the simplifying max:

1. See the journey.

2. Identify where a single pain hurts the customer most. Why there? Because that is where the person paying is most likely to want to pay first.

3. Solve that pain and problem, ASAP — even if we don't have all the relevant data to hand.

4. Move on to the next in line.

This means we engineer dependencies — that is to say, vulnerabilities — into all our traditional #startup ecosystem products & services from the very beginning. We create using a method that simplifies, so that we actually deliver #complexproblems-in-waiting.

And then we get to the second, really huge, challenge. It follows immediately from the first:

1. We cannot solve the vulnerabilities and inefficiencies of such #complexproblems-in-waiting, created with simplifying tools in the first place, by continuing to use simplifying tools.

2. Ergo, in ALL #startup work — even, potentially, for the simplest of software applications — we are using quite the right tools to invoice the customer ASAP, and quite the wrong tools to:

a) accurately identify the truths of such workplaces, so we can reduce reworking waste to a minimum in the software-engineering part of the development process itself; and

b) follow up with vulnerability-proof maintenance & management services: the ongoing oversight of the #complexsystems we have developed using processes that are essentially not fit for purpose because they can only simplify. And not only this: those processes actually bring their very own specific, additional vulnerabilities with them too.

Imagine, then, what happens when you strive to take a technology such as #generativeai — a supposedly #complexsystem built on the massive multiplication of very discrete, very basic component parts — and make it a #generalpurposetechnology, useful in all areas of human endeavour.

With all I've written today, it's clear why it has become such a terrible mess. This #complexsystem, built out of multiple basic components and thus dependencies — and therefore, also, equally, vulnerabilities of the worst strategic kind — is, in the event, far more unfocussed on the problems to hand — that is, definitely NOT a general-purpose anything — than any human #polymath has ever been accused of being. And yet we continue to confine most #polymaths to relatively menial tasks precisely because of their alleged inability to see the whole picture: the forest in its businesslike utility, because of the trees in their distracting huggability.

When, in truth, it's precisely they who not only see this picture but also, easily, connect its multiple disparate elements, in satisfyingly safe ways that remove — utterly and entirely — those all-over-the-place dynamics from the frame.

Conclusion? If we want in the future to create any robust, vulnerability-free #gpt — that is, a #generalpurposetechnology useful in all areas of human activity — we DON'T continue to employ traditional #startup #lean to do it; we DON'T discard the #neurodiverse thinking that #polymaths exhibit with such ease; and, once these systems are finally set up, we MUSTN'T use the same processes and tools for the ongoing maintenance & management of the software we've created as we might, even so, still have used for its creation in the first place.

____________________

Further reading:

https://gb2earth.com/newlean

http://complexify.me / https://www.sverige2.earth/complexify

https://gb2earth.com/climate

https://gb2earth.com/invest-example
