The thought of "machines rebelling" most probably stems from people watching too much pop science fiction. It isn't "rebelling" but "being badly programmed". An AI system adheres to its programming. People make mistakes when programming... an AI never "makes mistakes" when it merely follows its algorithm. Saying that "AIs make mistakes" instead of "programmers make mistakes" would be akin to saying that a gun shoots the wrong person, when it's the person holding the gun who shoots the wrong person.
p.s. This isn't a gun debate. The issue at hand is individual culpability (i.e. an "individual" as a conscious entity) rather than the availability of objects to culpable individuals.
i observed how the slow legalisation of marihuana
led to the practice being picked up by young urban professionals.
when marihuana was illegal, hippies and nature lovers praised marihuana.
now that the threat of arrest is all but gone
money-focused people have embraced marihuana and totally abandoned the spirit of the subculture that made this possible for them.
at the sports park, people of all ages gather to maintain fitness, drink beers, sit and socialize, perhaps philosophize, and smoke marihuana.
the garbage cans are full of plastic beverage containers. pensioners and others collect the depositable containers.
i also do this.
but no one cares to unite to find a way to gather the non-depositable containers for recycling.
i am not a loud or pushy person so my efforts to solve this were not extreme. i gave up.
i am however left with this viewpoint that is solid.
there is not much else to be said without delving into repetition or redundancy.
a two part closing. i am personally left with a puzzle on this quest as to how to get people to recycle. as i see it, many people around my age should feel an obligation and a debt to the hippy-environmental community that brought marihuana into their lives during the time of prohibition. Thus i am puzzled by their eager acceptance of the freedom to enjoy marihuana while rejecting the values of those who made this possible for them.
finally i am left to view this phenomenon, the new marihuana community's majority not recycling, from various perspectives. Pride, personal, moral or religious, etc. A sense of obligation perhaps. And the negative capitalistic view of recycling, even at a park that they constantly enjoy, as a burden, a non-mandatory choice rejected since it is not a form of employment.
(perhaps this post is phrased and worded rather simplistically, though i tried for an elegant simplicity. i must admit i am currently suffering a circumstance of most of my family's library having been boxed away into storage.)
and to leave us all with a quaint observation: i have recently begun to notice misspellings in books. All sorts, from major publishers, even encyclopedic-type history books. that phenomenon gives me differing feelings. it makes me feel hopeful for my own endeavors, seeing errors where large amounts of effort were devoted. it also makes me question the facade of high class status symbols.
i bid you all a good time.
i hope this post will be received well, and that it doesn't upset anyone desiring an in-depth look into the mechanics of thought, complete with references to the great minds of the ages.
no. just a simple look at a phenomenon, hopefully somewhat at least in line with the spirit of the ancient Greeks
i.e. behold we are people in civilization and this is happening, noting the similarities and differences of things that used to happen.
(if i had the books at my convenience, perhaps i could have felt safe in continuing on a few aspects)
The root cause of the issue is the mishandling of the Problem of Universals, along with the resultant shoehorning of logical and statistical solutions.
Logic cuts and divides everything in the world into distinct parts, down to supposed universal properties across different objects.
Let's see how this creates issues, starting with The Raven Paradox.
You see a bunch of apples that aren't black, and that is somehow supposed to contribute to the perception that all ravens are black? To some that may seem a crock of nonsense because it starkly runs against intuition (what the heck does an apple's color have to do with a raven's?), yet that's how learning is supposed to work according to the Bayesian treatment of the issue. It's an equation, so that's how it's supposed to work... It's an obvious case of shoehorning. The solution doesn't sound sensical, and if theory runs against common sense then "so be it, because we as people must also work that way since there's no better solution"? Really? Sounds like a fallacy from paucity of imagination.
No, I think not. Forcing "off-topic" considerations in learning is simply forcing a bad solution onto the problem. The Bayesian statistical solution to the Problem of Universals is wrong.
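To make the criticized reasoning concrete, here is a minimal sketch of the standard Bayesian treatment of the paradox. The numbers and rival hypothesis are my own toy choices, not taken from any particular source: H says "all ravens are black", H' says "exactly one raven is non-black", and the evidence is that an object already known to be non-black turns out to be a non-raven (a green apple).

```python
from fractions import Fraction

def update_on_nonblack_nonraven(n_nonblack_nonravens=4, prior=Fraction(1, 2)):
    # Under H, every non-black object must be a non-raven.
    like_h = Fraction(1)
    # Under H', one of the non-black objects is the odd raven, so a
    # randomly drawn non-black object is slightly less likely to be
    # a non-raven.
    like_h_alt = Fraction(n_nonblack_nonravens, n_nonblack_nonravens + 1)
    # Bayes' rule over the two rival hypotheses.
    return (like_h * prior) / (like_h * prior + like_h_alt * (1 - prior))

p = update_on_nonblack_nonraven()
print(p)  # 5/9 -- the apple observation nudges P(H) above the 1/2 prior
```

The arithmetic does exactly what the paradox says: an observation with no apparent connection to ravens raises confidence in a claim about ravens, simply because the equation is wired that way.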
What is the mistreatment here? It's the assumption that properties exist as universal objective properties. There are multiple things wrong with that assumption:
- "Color" is subjective. It doesn't exist "out there in the world" but entirely in your head https://www.extremetech.com/extreme/49028-color-is-subjective and as such it's a description of the subject and not the object
- How do we assure ourselves that a property attached to one kind of object can be identical to a property attached to another kind of object at all? Isn't that another assumption subject to debate? Let's say for the sake of argument that there is a black apple. How do we come to know that the "black" of black apples can be identical to the "black" of black ravens? One property supposedly arises from the skin of a fruit while the other arises from a collection of barbs of feathers. Isn't this a comparison between the colors "apple black" and "raven black" instead? How is that any different from comparing, say, apples with oranges?
A solution to Zeno's motion paradoxes is thus:
- There is persistence of memory of object "A" from perceived static position A1 until the next perceived static position A2, all the while the position of dynamic object "Ã" is unidentifiable
- There is perceived movement of A from A1 to A2
- Perception and conception of A1, A2... are quantized; the progression of Ã is still continuous
- Thus, wherever the object is logically positioned (i.e. identified), it is already no longer at that logical position
Okay, now that we've talked about how machines don't learn, one may wonder how learning actually occurs.
It's a psychological process, with mental impressions built over time involving both firsthand experiences and secondhand descriptions.
Then what are they doing when they appear to be learning?
Well, they hack. They hack the activity of learning and thinking by trying everything and failing really fast, so fast that they appear to be apprehending the meaning of the activity instead of processing the arbitrarily assigned symbols associated with it. If you look at what those machines do at each step, it becomes obvious what the activity they engage in actually is.
If Chinese Rooms are "rooms that appear to understand Chinese" then Learning Rooms are "rooms that appear to learn".
In case of "learning to identify pictures", machines are shown a coupla hundred thousand to millions of pictures of pretty much everything, and through lots of failures of seeing "gorilla" in bundles of "not gorilla" pixels to eventually correctly matching bunches of pixels on the screen to the term "gorilla"... except that it doesn't even do it that well all of the time
Needless to say, "increasing performance at identifying gorilla pixels" is hardly the same thing as "learning what a gorilla is".
Mitigating this dumb sledgehammer strategy involves artificially prodding the machines into trying only a smaller subset of everything instead of absolutely everything.
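The sledgehammer and its mitigation can be caricatured in a few lines. This is deliberately not a real ML pipeline; it's a toy of my own construction showing the shape of the activity: blind trial and error scored against a hidden answer key, with no notion anywhere of what "gorilla" means, and then a hard-coded prior that prunes the search space so far fewer tries are needed.

```python
import random

random.seed(0)
ANSWER_KEY = ["gorilla", "not-gorilla"] * 4  # hidden ground truth

def score(guess):
    # Count how many labels happen to match the hidden key.
    return sum(g == a for g, a in zip(guess, ANSWER_KEY))

def blind_search(trials):
    # The sledgehammer: try random labelings, fail fast, keep the best score.
    best_score = -1
    for _ in range(trials):
        guess = [random.choice(["gorilla", "not-gorilla"]) for _ in ANSWER_KEY]
        best_score = max(best_score, score(guess))
    return best_score

def pruned_guess():
    # The "mitigation": a built-in assumption (labels alternate) shrinks
    # the space of things to try down to a single candidate.
    return score(["gorilla" if i % 2 == 0 else "not-gorilla" for i in range(8)])
```

At no point does either routine do anything recognizable as apprehending a meaning; one just fails through candidates faster than the other.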
It's no wonder Go masters are quitting. There's no point in trying to go up against that kind of dumb crap that flies at light speed
- G. Teubner, "Rights of Non-Humans? Electronic Agents and Animals as New Actors in Politics and Law" (link)
This is a fascinating article.
Putting it into the context of the debates that used to rage here, a few things must be said:
- "Continental philosophy" is the primary philosophical reference point in German legal scholarship. Their whole legal system is built on that tradition. Grokking that, which took a great deal of time, really made be grateful to the exposure to continental theory here. Because now the EU, which is largely based on German legal theory, is leading the world in technology law.
- The two theorists listed here are the Great Names in continental philosophical theory as of 10-20 years ago. It's funny that we never talked about them here. In my view, Latour is terrible, and Luhmann is fantastic. But your mileage may vary.
- Latour got famous riding out the Science Wars on a social constructivist platform. This was done under the auspices of social science research, specifically ethnographies of laboratories. It's very poorly positioned philosophically, in my opinion, but nevertheless became wildly popular. My best guess as to why is that it was what mediocre people think smart people sound like. He's changed his tune since, and now he has so many positions he's hard to track.
- Luhmann is a 'systems theorist' of social science, a student of Talcott Parsons who drew a lot from the second-order cyberneticists Maturana and Varela, who are _amazing_. Luhmann got some recognition for his epic argument with Habermas, who is/was of course the culmination of the Frankfurt School. Luhmann is, in many ways, the cybernetics/pragmatist/engineering-mentality side of that tradition.
The combination of the two is a bit unholy. But it's a novel approach to a significant practical problem that requires philosophical insight to address: how to deal with all the artificial 'agents' that are not really 'persons' per se.
Food for thought, in case anybody checks this place any more. Looks like @nanikore is still around...
The researchers set out to develop a general-purpose language algorithm, trained on a vast amount of text from the web, that would be capable of translating text, answering questions, and performing other useful tasks.
I don't suppose some people would give these algorithms rights if they take on the appearance of language "comprehension"?
If gene expression pokes a gaping hole in those analogies, new scientific findings absolutely obliterate them:
Show me a program in which every segment of every line of code influences absolutely everything. Programs don't work that way, there's no such program, and genes don't work like programs... not even close.
Genes don't spell out a compartmentalized programming language. Genes aren't programming code.
Not only are minds gestalts; biological organisms are as well.
Let's put those abusive and broken analogies to pasture once and for all.
AI textbooks readily admit that the "learning" in "machine learning" isn't referring to learning in the usual sense of the word.
Yet, I've encountered many people, even those in the AI field, who conflate this specialized term with the usual sense of learning.
Start reading from the end of page 4. Note that the word "experience" isn't used in the conventional sense, either.
For example, a database system that allows users to update data entries would fit our definition of a learning system: it improves its performance at answering database queries based on the experience gained from database updates. Rather than worry about whether this type of activity falls under the usual informal conversational meaning of the word "learning," we will simply adopt our technical definition of the class of programs that improve through experience.
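The quoted database example can be made concrete in a few lines. This is my own illustrative sketch (the class and method names are not from the textbook): by the technical definition, performance at a task improving with experience, even a plain key-value store "learns", since its query-answering improves with each update.

```python
class QueryAnswerer:
    """A plain key-value store, 'learning' only in the technical sense."""

    def __init__(self):
        self._db = {}

    def update(self, key, value):
        # The "experience": a database update.
        self._db[key] = value

    def answer(self, key):
        # The "task": answering database queries.
        return self._db.get(key, "unknown")

system = QueryAnswerer()
before = system.answer("capital of France")   # "unknown"
system.update("capital of France", "Paris")
after = system.answer("capital of France")    # "Paris" -- performance improved
```

The store satisfies the technical definition perfectly while doing nothing anyone would conversationally call learning, which is exactly the textbook's point.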
"Experience" isn't just data collection, either. Cue The Knowledge Argument.