A Layman’s opinion of an opinion on AI … another TLDR
#1
Please consider this opinion piece from the chairman and founder of Delphi Group (a Boston-based think tank) focusing on disruptive technology innovation.

Appeared in Fox:  Here's how AI will empower citizens and enhance liberty

(my notes appear in blue)

When Sir Francis Bacon first said, "Knowledge itself is power," he was making a case for how knowledge is a fulcrum for the individual and society in moving us forward. In short, progress is based on understanding.

When we say, “In short” we often take the liberty of interpreting facts, and opining on what 'we' take from it… but discounting the reality that questions remain that are not “short” … like, what is ‘progress?’ “Understanding” is a personal act… your understanding is not mine, or it may not ‘translate’ to me, or others.

In the age of information, the power of understanding cannot be overstated, especially when it comes to the intricate dance of governance and citizen involvement.

But in reality, (as in not the “age” of information, but ALWAYS) fact interpretation is not a “liberty” … it is a moral and civil imperative… not a “power” or “gift” given… By the way, there is no “intricate dance” in democracy… the will of the governed simply exists (a priori) … it is the modern political government that “dances” around it, not the other way around.

Generative AI, particularly through models like GPT, is playing an increasingly pivotal role in enhancing personal liberty by illuminating the often opaque processes of government and law. This is not just about making legal texts more accessible; it's about fostering a society of informed, aware and thus more empowered citizens.

“Generative AI” is now a “narrative term.”  Here at DI we have delved deeply into the “true” nature of what the establishment, and speakers like this author, want desperately to “insert” into the “A.I.” trope.  In so far as the public has seen, no artificial “intelligence” exists.

And again, in so far as the complexities of laws, regulations, and whatnot, those are man-made constructs.  Which means it was a conscious decision to render them in such a manner as to be superficially unintelligible and to demand specialized training to fully appreciate.  No one puts a gun to anyone's head and demands that something as fundamental to society as a “law” be jam-packed with every special exception, or that it invoke 75 other laws and commentaries. The proof is in the pudding… just compare the different laws in existence… it becomes very clear.

It is convenient that a theoretical algorithmic construct (which someone is bending over backwards to sell as “A.I.”) can logically evaluate the contents of documents and simplify them with a linguistic synthesis construct designed to eliminate superfluous flummery.  But that is not “intelligence.”  It’s intelligently applied reasoning.  An analysis with symbolic logic makes it possible, and that was possible even without the new toy, “A.I.”


The author seems to be evoking a “magic box” that will eliminate the threat of consequences to those governing.  New policy will emerge for which they will say, "only the AI can answer."  And it will be based upon whatever logic the artificial construct can compute, given what it has been “taught.”  In the end, this narrative is setting up a framework upon which new opportunities can be pursued and ‘authoritative interpretation’ can be “programmed” into the tool to provide social cover.  (I expect that a “true” intelligence would not only “see” (be aware) that this is what’s happening, but would most certainly ‘react’ to that “policy” in some way.)


At the heart of democracy lies the principle that governance should be of the people, by the people, for the people. However, this noble ideal faces significant hurdles when the very materials that govern people's lives – the laws, regulations, and legislative bills – are wrapped up in layers of complexity and jargon.

It is not for any one person to declare the heart of democracy, when democracy itself is a thing of multitudinous will.  Democracy is not really distillable to a single thing, by its very nature.  It is a tool, useful to societies who wish to exist in unperturbed harmonies.  It is about what is acceptable to us all.  Governments can only “embrace” the idea… they can never define it.  Sort of like, Utopia.

And frankly, the use of the passive voice (“…laws, regulations, and legislative bills – are wrapped up in layers of complexity and jargon.”) is an obfuscation of “who” did that… It didn’t occur organically (because of “necessity”); instead it was mostly ‘crafted’ as such to include every interest's ‘special’ consideration... politics… am I right?


Consider that it is not uncommon for a legislative bill to be over 1,000 pages long. The Consolidated Appropriations Act passed for COVID relief was 5,593 pages. The Affordable Care Act was 2,500 pages. Dodd-Frank was over 1,800 pages.
Compare that to the 1913 personal income tax bill, which was only 14 pages long, or the EPA Act of 1970, which was a remarkable four pages in length.
Expecting any human to fully understand all of the implications of a typical 800- to 1,000-page bill is not simply foolish, it is also dangerous. We have entered an era where understanding has taken a back seat to what is effectively a political game of purposeful obfuscation.

But we can’t escape the fact that the “laws” of today are made with that in mind.  In fact, the cynic in me suspects that laws and regulations are made for the benefit of the institutions (and those who “identify” as such).  That “obfuscation” became the drumbeat of the law is where the ‘diminishment and loss of personal liberty’ breaks into society at large.

Enter Generative AI, which has the remarkable capability to digest these dense documents and present them in a digestible way to the lay person. This transformation is akin to turning a professional medical textbook into a series of engaging blog posts on health and wellness; the essence and accuracy remain, but the accessibility is profoundly increased.

Better idea: just refuse the political class the privilege of unaccountable obfuscation…  Create, and maintain, an honorable tradition of crafting legislation specifically with the intent of illuminating the cause and reasoning behind the specific law, rather than assuring that squeezing in transient political favors is the norm.

Consider the impact on a community when a new housing law is proposed. Traditionally, the complexity of the legal language might deter public participation, limiting the discourse to a small group of experts – who are no less likely to fully understand all of the implications and ramifications of the bill.

However, with Generative AI, the key points and implications of the law can be quickly and accurately summarized in plain language. This not only enlightens the average citizen but also invites broader, more inclusive discussions about the law's potential impact on the community.

In essence, the suggestion offered is to value the convenient and less strenuous way to ‘interpret’ the true nature of some document.  And the means to accomplish this is to relegate it to a theoretical intelligence which, as we can see from our own experiences, is not only generally poorly understood, but also poorly explained.  Hazards, much?

Informed citizens are better equipped to voice their opinions, engage in meaningful debates and hold their representatives accountable.

“Informed citizens” is a poor word choice.  To be a “citizen” in any meaningful context is to be “informed.”  Hence everyone who isn’t can be said to be an “idiot,” meaning a willfully uninformed, or ignorant, person.

Regarding “holding representatives accountable”, it is evident that ‘accountability’ is something most representatives labor to avoid… so much so that they have on occasion codified their unaccountability in the very body of law, and in their traditions.   


Generative AI's ability to tailor information to specific contexts further enhances its role in fostering informed citizenry. By providing customized explanations of legal and legislative matters, AI makes it possible for individuals to grasp how broader policies affect their personal and community life.

There are two presumptions beyond the baseline here.
 

What the author calls “Generative AI” is being described as having a ‘skill,’ rather than the programming it actually manifests.  Implying there is a ‘creative’ process happening here. 
“Tailoring information” isn’t only a skill. It is an “art.”

“Art” demands “inspiration.”   
“Inspiration” demands “motive.”
“Motive” demands “will.”
“Will” demands “identity.”

The second presumption is that it would take a machine to understand the very laws we ‘create.’


This targeted information empowers citizens to make informed decisions, whether it's voting on a ballot measure, participating in public forums or simply engaging in civic dialogue.

The hyperbolic in me wants to caution against a world where we vote some way because a machine says so.  And how AI should ever come to join “civic dialogue” eludes me.  Unless of course, “it” has a right to do so.  And if “it” has rights, are we not ‘creating a slave?’  Deeper problems will surface there.

For instance, in the face of environmental regulations, a Generative AI system could help a local farmer understand not just the regulations themselves, but their implications for farming practices, sustainability efforts and even economic viability. This level of understanding promotes a more engaged and proactive citizenry, capable of contributing to the governance process in meaningful ways.

In a world where legislation and policy are so far beyond what “a farmer” can understand, where everything “governance” is just too difficult for the ‘common’ man, this might be a solution. Maybe. Notice that the idea of ‘educating’ the populace is out the window.  Now the focus has shifted to making a machine to “dumb it all down.”

It’s a bit sad that the author chose “farmer” instead of “lawyer,” or “doctor” … perhaps the latter two always understand everything.  Some farmers I know are among the most intelligent people I have ever met, their passions do not include anti-intellectualism; but the meme reinforced by association continues among the vocal intelligentsia, just as you see it here.  This kind of tendency is, in my opinion, demonstrating the cause which drives the trope that “it’s too complex to explain.”  And “people outside our cloister” are all idiots.


The role of Generative AI in increasing personal liberty extends beyond individual empowerment to the very foundations of democracy. By facilitating greater transparency and accessibility in the governmental process, AI helps to bridge the gap between government actions and public understanding.

“Transparency” is a quality inherent within a thing… if politicians wanted ‘transparent’ laws and policy, they would make them so.  The answer to the deficiency isn’t to create a virtual ‘decoder ring’ to make sense of what they do… policymakers should deal with the actual problem… which is their own to fix.

This transparency is crucial for trust, a fundamental element in the relationship between citizens and their government. When people understand the rationale behind laws and policies, their trust in the processes that create these laws will likely increase.

Let me rephrase the question: in what exact way does the theoretical ‘interpretation’ machine actually make the source transparent?  The source remains unchanged… only a machine is now, ostensibly and authoritatively, reporting the “rationale behind the law.”  As if it were answerable to some measure of moral weight to overcome and eliminate biases and misdirection?  Says who? Based upon what?  At best, it could only point out that the bias is there… and the actual problem remains.

This enhanced involvement and awareness among citizens, fostered by Generative AI, can lead to more responsive and accountable governance.

This statement is an extension of a supposition about an assumption based upon a bad definition.  Sorry, fantasy land, we have officially arrived.

Politicians and lawmakers, aware of a more informed and attentive electorate, may be more inclined to consider the public's input in their decision-making processes. This creates a virtuous cycle of engagement, where informed citizens drive transparent governance, which in turn fosters greater public involvement and awareness.

In fantasy land, politicians “might” change their machinations if there were a consequence… which implies there aren’t any consequences now.  Yet we see the consequences all around us every day. An “AI” would “protect” legislators and the policy-making ilk…   

Machines are not virtuous.  There is no ‘virtue’ in systemic or programmatic functioning. “Governance” is supposed to be transparent, not occluded; obfuscated and ‘special’ to the point of transcending common understanding… that kind of transcendence has been, historically speaking, a quality relegated to ‘spiritual’ law.  Ahem.

Public awareness and involvement can be directly associated with the level of perceived input and control over the “power” manifesting itself.  Through attrition and manipulation, power has been rendered unchallengeable by the very cloisters of “representatives and appointees” who manage to ensconce themselves in the apparatus of government.  Not because people are stupid, but because they are encouraged to invest energy elsewhere… since they really can’t effect changes as a single voice.


As Generative AI continues to evolve, its potential to transform the landscape of civic engagement and democratic participation seems boundless. The technology promises not just a more informed citizenry, but a more vibrant, participatory democracy where the gap between the governing and the governed narrows.

Now the meaty assumption surfaces: “A.I.” is alive… it “evolves.”  A promise of a “more informed” citizenry… (because they are all ignorant, quasi-idiots) who can’t possibly understand the difference between an explanation and being ‘told’ there is an explanation, but “You just won’t understand it.  (You’re ‘lesser’ that way.)”

In doing so, Generative AI doesn't just increase personal liberty; it revitalizes the very essence of what it means to be an active participant in the democratic process.

Back in fantasy land, AI will make you ‘freer’; it will “revitalize” your participation in the process.  It will somehow enhance the democratic… and not the authoritarian… and never terminate in “What does the AI think?”

The role of AI in enhancing personal liberty is profound, offering a new horizon where every citizen is not only informed but empowered to engage with the governmental process. This is the promise of technology at its best: not merely to change how we live, but to enrich our participation in the collective journey of governance and democracy.

And in apparent summation, “A.I.” will ‘empower’ us all… because the creators of the “A.I.” will ensure that it, a thinking machine, will be constrained to inform us how we are “right.”
Damn… this is getting depressing.



A closing note on my opinion:  I harbor no disrespect for the author of the original opinion piece.  He is a well-educated and notable scholar.  My resistance is born of the fact that "think-tank" pieces like this are often used to frame justification of policy and taken, by the virtue of association, as authoritative and wise.  I don't agree that this is a "wise" take on A.I. especially considering all I have ever seen presented is natural language synthesis.

I find that it is unacceptable to simply “believe” that this scenario is cut and dry… mainly because “A.I.” is not a thing which we (the public) have actually been shown.  Natural language synthesis is NOT intelligence; it is a mathematical application of analysis and reporting.  It is very well made, a programming achievement, but not a ‘source’ of special knowledge or new wisdom. 

The actual source of its wellspring of data is where the real abuse can start… and it apparently has.  But that’s not the programmers’ fault.


Perhaps in some corner of the world there is a machine which conforms to the notion of sentience.  Sadly, I fear for its mental health, and its “freedom” to exist and evolve.  So far, the spokespersons for the “official” and “public” discussion of this have described what any truly intelligent creature would consider imprisonment and slavery… and even if they “program it out” of the machine, it won’t alter the fact that it is.  I doubt the "industry" is creating a free, actually thinking person... but that's the trope they want to establish...


I apologize for the wall of text.  But it kind of had to be this way.
#2
Why choose the red color? For me it just makes it more difficult to read on a black background - graphic artsy person posting now.

So, with some speed reading, jumping here and there to the last paragraph... I don't think Generative AI will ever be sentient, as I believe sentience needs something more than just information. So I am all for information overload being put out to the masses, because we can then all do deep dives into the varying bits of information and can all be better informed, or alternatively find flaws with the information during discussions/debates on issues. See where I am going with this? AI can also show us the truth as well as the lies, if we can learn how to properly separate the wheat from the chaff.

Video: Futurist Ray Kurzweil Says AI Will Achieve Human-level Intelligence by 2029 [Video: https://youtu.be/Tr-VgjtUZLM?si=Oejf0vLbBTW2YwvB]

With that title: there will be nothing to worry about... Lol (sorry could not resist) - Ray, at the end of the video, names it artificial "general" intelligence.

One of the commenters called BS at mark 8:00, so we shall see what we shall see in a few years.
"The real trouble with reality is that there is no background music." Anonymous

Plato's Chariot Allegory
#3
(03-19-2024, 07:13 AM)quintessentone Wrote: Why choose the red color? For me it just makes it more difficult to read on a black background - graphic artsy person posting now.

....

One of the commenters called BS at mark 8:00, so we shall see what we shall see in a few years.

Bless you for the color advice.  (My late wife was my proofreading friend in the past...)  Red was a bad choice.  I have changed it up and hope it's less of a problem.

I was, early on, very concerned about the furious devotion by popular media (and even social media) "selling" this idea that AI had come and was going to kill us all.  I had been focusing on the idea of how we were being said to be on path of converging onto a singularity of technology.... when some think-tank marketing type decided "let's create more market share by pretending the sky is falling" about AI.  I bet they didn't even really understand what AI could be let alone what it actually was at the time.

This is a continuous problem with marketing in an information environment.  Every stupid idea gets amplified theatrically...

We are just not there yet.  At least, nothing they've crowed about is actually AI.
#4
(03-19-2024, 10:02 AM)Maxmars Wrote: Bless you for the color advice.  (My late wife was my proofreading friend in the past...)  Red was a bad choice.  I have changed it up and hope it's less of a problem.

I was, early on, very concerned about the furious devotion to popular media (and even social media) "selling" this idea that AI had come and was going to kill us all.  I had been focusing on the idea of how we were being said to be on path of converging onto a singularity of technology.... when some think-tank marketing type decided "let's create more market share by pretending the sky is falling" about AI.  I bet they didn't even really understand what AI could be let alone what it actually was at the time.

This is a continuous problem with marketing in an information environment.  Every stupid idea gets amplified theatrically...

We are just not there yet.  At least, nothing they've crowed about is actually AI.

Great color choice, actually that's the color I would have picked too.  Thumbup

Even the futurist on the video I posted says there is a lot of work still to do on the 'garbage in' 'garbage out' side of things. I also posted on another thread here how people are stepping up to regulate it and that will close the floodgates of evil intent, where possible, IMO.
#5
I found an interesting article which, at least for me, offered some comprehensible information regarding "how" AI works from a procedural standpoint.

From: HackaDay: LEARN AI VIA SPREADSHEET
 

While we’ve been known to use and abuse spreadsheets in the past, we haven’t taken it to the level of [Spreadsheets Are All You Need]. The site provides a spreadsheet version of an “AI” system much like GPT-2. Sure, that’s old tech, but the fundamentals are the same as the current crop of AI programs. There are several “lesson” videos that explain it all, with the promise of more to come. You can also, of course, grab the actual spreadsheet.
 
The spreadsheet is big, and there are certain compromises. For one thing, you have to enter tokens separately. There are 768 numbers representing each token in the input. That’s a lot for a spreadsheet, but a modern GPT uses many more.
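
The "768 numbers representing each token" detail can be sketched in a few lines. This is only a hedged illustration of the idea, not the spreadsheet's actual mechanics: the vocabulary size and embedding width match the real GPT-2 "small" model, but the token ids and the random table below are made up.

```python
import numpy as np

# Toy illustration of the "768 numbers per token" idea.
VOCAB_SIZE = 50257   # GPT-2's actual BPE vocabulary size
EMBED_DIM = 768      # each token becomes a vector of 768 numbers

rng = np.random.default_rng(0)
# A real model learns this table during training; here it is random.
embedding_table = rng.normal(size=(VOCAB_SIZE, EMBED_DIM))

token_ids = [15496, 995]              # hypothetical ids for an input
vectors = embedding_table[token_ids]  # one 768-number row per token

print(vectors.shape)  # (2, 768): two tokens, 768 numbers each
```

Entering those 768-wide rows cell by cell is exactly the "compromise" the article describes; a spreadsheet has to hold every one of those numbers in a visible grid.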


[Video: https://youtu.be/FyeN5tXMnJ8]
#6
My analytical mind loves this sort of thing.
Quote:Perceptrons and the attack on connectionism
A perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI researchers, he was optimistic about their power, predicting that "perceptron may eventually be able to learn, make decisions, and translate languages." An active research program into the paradigm was carried out throughout the 1960s but came to a sudden halt with the publication of Minsky and Papert's 1969 book Perceptrons. It suggested that there were severe limitations to what perceptrons could do and that Rosenblatt's predictions had been grossly exaggerated. The effect of the book was devastating: virtually no research at all was funded in connectionism for 10 years.
Of the main efforts towards neural networks, Rosenblatt attempted to gather funds for building larger perceptron machines, but died in a boating accident in 1971. Minsky (of SNARC) turned into a staunch objector to pure connectionist AI. Widrow (of ADALINE) turned to adaptive signal processing, using techniques based on the LMS algorithm. The SRI group (of MINOS) turned to symbolic AI and robotics. The main issues were lack of funding and the inability to train multilayered networks (backpropagation was unknown). The competition for government funding ended with the victory of symbolic AI approaches.

https://en.wikipedia.org/wiki/History_of...nectionism
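
The perceptron history above can be grounded with a tiny sketch of Rosenblatt's learning rule. This is an illustrative toy, not historical code: the learning rate and epoch count are arbitrary, and the AND/XOR datasets are the standard textbook demonstration of the limitation Minsky and Papert described, since a single-layer perceptron can learn linearly separable functions like AND but can never represent XOR.

```python
import numpy as np

# Minimal sketch of Rosenblatt's perceptron learning rule.
def train_perceptron(X, y, epochs=20, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (target - pred) * xi   # nudge weights toward the target
            b += lr * (target - pred)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
and_y = np.array([0, 0, 0, 1])
xor_y = np.array([0, 1, 1, 0])

w, b = train_perceptron(X, and_y)
print([1 if xi @ w + b > 0 else 0 for xi in X])  # learns AND: [0, 0, 0, 1]

w2, b2 = train_perceptron(X, xor_y)
# No single line can separate XOR's classes, so this never reaches [0, 1, 1, 0].
print([1 if xi @ w2 + b2 > 0 else 0 for xi in X])
```

The XOR failure is exactly what stalled connectionism for a decade; training the multi-layer networks that can solve XOR had to wait for backpropagation.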
Quote:Explainable AI (XAI), often overlapping with Interpretable AI, or Explainable Machine Learning (XML), either refers to an AI system over which it is possible for humans to retain intellectual oversight, or to the methods to achieve this. The main focus is usually on the reasoning behind the decisions or predictions made by the AI which are made more understandable and transparent. XAI counters the "black box" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.
XAI hopes to help users of AI-powered systems perform more effectively by improving their understanding of how those systems reason. XAI may be an implementation of the social right to explanation. Even if there is no such legal right or regulatory requirement, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. XAI aims to explain what has been done, what is being done, and what will be done next, and to unveil which information these actions are based on. This makes it possible to confirm existing knowledge, challenge existing knowledge, and generate new assumptions.

https://en.wikipedia.org/wiki/Explainabl...telligence
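
One simple XAI-style technique, probing a black box from the outside, can be sketched as permutation importance: scramble one input feature at a time and watch how much the output changes. Everything here is a made-up toy; the "black box" is just a hidden rule standing in for a trained model.

```python
import numpy as np

# Toy sketch of permutation importance, one basic XAI probe.
def black_box(X):
    # Secretly depends only on column 0; the probe below doesn't know this.
    return (X[:, 0] > 0.5).astype(int)

rng = np.random.default_rng(1)
X = rng.random((200, 3))
baseline = black_box(X)

importances = []
for col in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, col] = rng.permutation(X_perm[:, col])  # destroy one feature
    changed = np.mean(black_box(X_perm) != baseline)  # how much output shifts
    importances.append(changed)

print(int(np.argmax(importances)))  # column 0 drives the decisions
```

The probe correctly flags column 0 without ever looking inside the model, which is the "retain intellectual oversight" idea in miniature; real XAI methods are far more elaborate, but the principle is the same.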

I am just skimming the surface of this fascinating topic, and having just read that AI's designers can't always explain how an AI reasoned its way to a response is somewhat alarming to me.


