i am bored

Assignment of culpability to machines is absurd


The thought of "machines rebelling" most probably stems from people watching too much pop science fiction. It's not "rebelling" but "being badly programmed". An AI system will adhere to its programming. People make mistakes when programming... AIs never "make mistakes" when they merely follow algorithms. Saying that "AIs make mistakes" instead of "programmers make mistakes" would be akin to saying that a gun shoots the wrong person when it's the person holding the gun who shoots the wrong person.


P.S. This isn't a gun debate. The issue at hand is individual culpability (i.e. the "individual" as a conscious entity) rather than the availability of objects to culpable individuals.

Ancient Greek philosophers wrote down their observations of phenomena in nature and human society

People don't want to recycle.

I observed how the slow legalisation of marihuana led to the practice being picked up by young urban professionals. When marihuana was illegal, hippies and nature lovers praised it. Now that the threat of arrest is all but gone, money-focused people have embraced marihuana and totally abandoned the spirit of the subculture that made this possible for them.

At the sports park, people of all ages gather to maintain fitness, drink beers, sit and socialize, perhaps philosophize and smoke marihuana. The garbage cans are full of plastic beverage containers. Pensioners and others collect the depositable containers; I also do this. But no one cares to unite and find a way to gather the non-depositable containers for recycling. I am not a loud or pushy person, so my efforts to solve this were not extreme. I gave up. I am, however, left with this viewpoint, which is solid.

There is not much else to be said without delving into repetition or redundancy.

A two-part closing. First, I am personally left with a puzzle on this quest as to how to get people to recycle. As I see it, many people about my age should feel an obligation and a debt to the hippie-environmental community that brought marihuana into their lives during the time of prohibition. Thus I am puzzled by their eager acceptance of the freedom to enjoy marihuana while rejecting the values of those who made this possible for them.

Finally, I am left to view this phenomenon, the failure of the new marihuana-community majority to recycle, from various perspectives: pride, whether personal, moral or religious; a sense of obligation, perhaps; and the negative capitalistic view of recycling, even at a park that they constantly enjoy, as a burden, a non-mandatory choice rejected since it is not a form of employment.

(Perhaps this post is phrased and worded rather simplistically, though I tried for an elegant simplicity. I must admit I am currently suffering the circumstance of most of my family's library having been boxed away into storage.)

And to leave us all with a quaint observation: I have recently begun to notice misspellings in books. All sorts, from major publishers, even encyclopedic history books. That phenomenon gives me differing feelings. It makes me feel hopeful for my own endeavors, seeing errors where large amounts of effort were devoted. It also makes me question the facade of high-class status symbols.

I bid you all a good time. I hope this post will be received well, and that it doesn't upset anyone desiring an in-depth look into the mechanics of thought, complete with references to the great minds of the ages. No, this is just a simple look at a phenomenon, hopefully at least somewhat in line with the spirit of the ancient Greeks.

That is: behold, we are people in civilization and this is happening, noting the similarities and differences with things that used to happen.

(If I had the books at my convenience, perhaps I could have felt safe in continuing on a few aspects.)

peace
i am bored

Machines don't learn part II: maladaptive logical and statistical application

Why do machines "learn" in maladaptive ways?

The root cause of the issue is the mishandling of the Problem of Universals, along with the resultant shoehorning of logical and statistical solutions.

Logic cuts and divides everything in the world into distinct parts, down to supposed universal properties across different objects.

Let's see how this creates issues, starting with The Raven Paradox.

You see a bunch of apples that aren't black, and that is somehow supposed to contribute to the perception that all ravens are black? To some that may seem to be a crock of nonsense because it starkly runs against intuition (what the heck does an apple's color have to do with that of ravens?), yet that's how learning is supposed to work according to the Bayesian treatment of the issue. It's an equation, so that's how it's supposed to work... It's an obvious case of shoehorning. The solution doesn't sound sensible, and if the theory runs against common sense then "so be it, because we as people must also work that way since there's no better solution"? Really? Sounds like a fallacy of paucity of imagination.
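To make the treatment being criticized concrete, here is a minimal toy sketch in the spirit of the standard Bayesian analysis. The numbers and the deliberately simplified pair of hypotheses are my own assumptions, not anything from the post. Under this setup, drawing a random non-black object and finding it to be an apple does nudge "all ravens are black" upward, but only by a vanishingly small amount:

```python
# Toy sketch (invented numbers) of the Bayesian treatment described above.
# Hypotheses:
#   H    : "all ravens are black"  -> no non-black object is a raven
#   notH : exactly one raven is non-black (kept minimal for simplicity)
# Observation: a randomly drawn NON-BLACK object turns out to be an apple,
# i.e. not a raven.

def posterior_after_nonblack_nonraven(prior_h=0.5,
                                      nonblack_nonravens=200_000,
                                      nonblack_ravens_if_noth=1):
    # Likelihood of drawing a non-raven when sampling among non-black objects:
    p_obs_given_h = 1.0  # under H, every non-black object is a non-raven
    p_obs_given_noth = nonblack_nonravens / (nonblack_nonravens + nonblack_ravens_if_noth)

    # Bayes' rule
    numerator = p_obs_given_h * prior_h
    evidence = numerator + p_obs_given_noth * (1.0 - prior_h)
    return numerator / evidence

print(posterior_after_nonblack_nonraven())  # ~0.5000012: a vanishingly small bump toward H
```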

No, I think not. Forcing "off-topic" considerations in learning is simply forcing a bad solution onto the problem. The Bayesian statistical solution to the Problem of Universals is wrong.

What is the mistreatment here? It's the assumption that properties exist as universal objective properties. There are multiple things wrong with that assumption:

  • "Color" is subjective. It doesn't exist "out there in the world" but entirely in your head (https://www.extremetech.com/extreme/49028-color-is-subjective), and as such it's a description of the subject rather than of the object.

  • How do we assure ourselves that a property attached to one kind of object can be identical to a property attached to another kind of object at all? Isn't that another assumption subject to debate? Let's say for the sake of argument that there is a black apple. How do we come to know that the "black" of black apples is identical to the "black" of black ravens? One property supposedly arises from the skin of a fruit while the other arises from a collection of barbs of feathers. Isn't this a comparison between the colors "apple black" and "raven black" instead? How is that any different from comparing, say, apples with oranges?

Logical paradoxes arise from the conception of the world as a fragmented entity, starting with logical reidentification: https://philosophy.livejournal.com/623685.html

A solution to Zeno's motion paradoxes is thus:

  1. There is persistence of memory of object "A" from perceived static position A1 until the next perceived static position A2, all the while the position of dynamic object "Ã" is unidentifiable

  2. There is perceived movement of A from A1 to A2

  3. Perception and conception of A1, A2, ... are quantized, while the progression of Ã remains continuous

  4. Thus, wherever the object is logically positioned (i.e. identified), it is already no longer at that logical position (a small sketch of this point follows the list)
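Here is a tiny sketch of my own of point 3 and point 4, assuming a constant speed and an arbitrary perceptual "tick"; none of the values come from the post. Identification happens only at discrete instants, while the motion is continuous, so every identified position is already stale:

```python
# Perception "identifies" positions A1, A2, ... at discrete instants,
# while the object à keeps moving continuously in between.

def continuous_position(t, speed=1.0):
    return speed * t          # Ã moves at all times

perceptual_tick = 0.25        # assumed quantum of perception/conception
for k in range(4):
    t_identified = k * perceptual_tick
    a_k = continuous_position(t_identified)       # the static position A1, A2, ...
    t_now = t_identified + 1e-6                   # any moment after identification
    print(f"A{k + 1} identified at x={a_k:.6f}, "
          f"but the object is already past it (x={continuous_position(t_now):.6f})")
```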

Fragmenting the perceptual/conceptual properties of objects (position, "color", etc.) and making them somehow identifiable apart from the objects themselves is the process of decoupling any context from knowledge. Is it any wonder that machines can easily mistake gorilla faces for human ones, and a photo of a crashing plane for one that's parked?

Okay, now that we've talked about how machines don't learn, one may wonder how learning actually occurs.

It's a psychological process, with mental impressions built over time involving both firsthand experiences and secondhand descriptions:
https://philosophy.livejournal.com/1382079.html
i am bored

Machines don't learn jack squat. Machines hack.

Machines don't learn, as I've previously mentioned.
https://philosophy.livejournal.com/2045332.html

Then what are they doing when they appear to be learning?

Well, they hack. They hack the activity of learning and thinking by trying everything and failing their way through really fast, so fast that they appear to be apprehending the meaning of the activity instead of processing the arbitrarily assigned symbols associated with it. If you look at what those machines do at each step, it becomes obvious what the activity they engage in actually is.
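As a minimal illustration of "trying everything and failing really fast" (my own toy, unrelated to the Q*bert example linked below), here is a guesser that appears to "know" a secret word while doing nothing but exhaustive trial and error:

```python
import itertools
import string

SECRET = "chat"   # hypothetical target; the machine has no notion of what the word means

def brute_force_guess(secret):
    """Try every lowercase combination of the right length until one matches."""
    for attempt in itertools.product(string.ascii_lowercase, repeat=len(secret)):
        candidate = "".join(attempt)
        if candidate == secret:      # "success" is nothing but a string comparison
            return candidate
    return None

print(brute_force_guess(SECRET))     # prints "chat" after tens of thousands of failed guesses
```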

If Chinese Rooms are "rooms that appear to understand Chinese" then Learning Rooms are "rooms that appear to learn".

https://www.alphr.com/artificial-intelligence/1008697/ai-learns-to-cheat-at-qbert-in-a-way-no-human-has-ever-done-before

In the case of "learning to identify pictures", machines are shown a couple hundred thousand to millions of pictures of pretty much everything, and through lots of failures of seeing "gorilla" in bundles of "not gorilla" pixels they eventually get to correctly matching bunches of pixels on the screen to the term "gorilla"... except that they don't even do that well all of the time:

https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai
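Here is a deliberately crude sketch of my own (made-up "images" of eight numbers each, with arbitrary labels; it is not how any production classifier is built) of what "matching bunches of pixels to a term" amounts to: comparing numbers to other numbers, with no gorilla anywhere in sight.

```python
import random

# Toy "dataset": each fake image is just 8 numbers; labels are arbitrary strings.
random.seed(0)
def fake_image(bright):
    return [random.gauss(bright, 0.1) for _ in range(8)]

train = [(fake_image(0.8), "gorilla") for _ in range(200)] + \
        [(fake_image(0.2), "not gorilla") for _ in range(200)]

# "Training" = computing the average pixel vector per label (nearest-centroid rule).
centroids, counts = {}, {}
for pixels, label in train:
    c = centroids.setdefault(label, [0.0] * len(pixels))
    counts[label] = counts.get(label, 0) + 1
    for i, p in enumerate(pixels):
        c[i] += p
for label, c in centroids.items():
    centroids[label] = [v / counts[label] for v in c]

def classify(pixels):
    # Pick the label whose centroid is closest; nothing here "knows" what a gorilla is.
    return min(centroids,
               key=lambda lab: sum((p - q) ** 2 for p, q in zip(pixels, centroids[lab])))

print(classify(fake_image(0.85)))   # the bright fake image lands near the "gorilla" centroid
print(classify(fake_image(0.15)))   # the dim one lands near "not gorilla"
```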

Needless to say, "increasing performance at identifying gorilla pixels" is hardly the same thing as "learning what a gorilla is".

Mitigating this dumb sledgehammer strategy involves artificially prodding the machines into trying only a smaller subset of everything instead of absolutely everything.

https://medium.com/@harshitsikchi/towards-safe-reinforcement-learning-88b7caa5702e

It's no wonder Go masters are quitting. There's no point in trying to go up against that kind of dumb crap that flies at light speed:
https://www.bbc.com/news/technology-50573071
helios

Teubner, using Latour and Luhmann, to elucidate the legal theory of non-human agents

Personification of non-humans is best understood as a strategy of dealing with the uncertainty about the identity of the other, which moves the attribution scheme from causation to double contingency and opens the space for presupposing the others' self-referentiality. But there is no compelling reason to restrict the attribution of action exclusively to humans and to social systems, as Luhmann argues. Personifying other non-humans is a social reality today and a political necessity for the future. The admission of actors does not take place, as Latour suggests, into one and only one collective. Rather, the properties of new actors differ extremely according to the multiplicity of different sites of the political ecology.

- G. Teubner, "Rights of Non-Humans? Electronic Agents and Animals as New Actors in Politics and Law" (link)

This is a fascinating article.

Putting it into the context of the debates that used to rage here, a few things must be said:

- "Continental philosophy" is the primary philosophical reference point in German legal scholarship. Their whole legal system is built on that tradition. Grokking that, which took a great deal of time, really made me grateful for the exposure to continental theory here. Because now the EU, which is largely based on German legal theory, is leading the world in technology law.

- The two theorists listed here are the Great Names in continental philosophical theory as of 10-20 years ago. It's funny that we never talked about them here. In my view, Latour is terrible, and Luhmann is fantastic. But your mileage may vary.

- Latour got famous riding out the Science Wars on a social constructivist platform. This was done under the auspices of social science research, specifically ethnographies of laboratories. It's very poorly positioned philosophically, in my opinion, but it nevertheless became wildly popular. My best guess as to why is that it was what mediocre people think smart people sound like. He has changed his tune since, and now he has so many positions that he's hard to track.

- Luhmann is a 'system theorist' of social science, a student of Talcott Parsons who drew a lot from second-order cyberneticists Maturana and Varela, who are _amazing_. Luhmann got some recognition for his epic argument with Habermas, who is/was of course the culmination of the Frankfurt School field. Luhmann is, in many ways, the cybernetics/pragmatist/engineering-mentality as reflected back into high German social theory. And even in Germany, he largely won the argument--German scholars you meet are far more likely to speak of their Luhmann studies than their Habermas studies.

The combination of the two is a bit unholy. But it's a novel approach to a significant practical problem that requires philosophical insight to address: how to deal with all the artificial 'agents' that are not really 'persons' per se.

Food for thought, in case anybody checks this place any more. Looks like @nanikore is still around...
  • Current Mood: pedantic
i am bored

This is even better than a Chinese Room.

A general "Language Room".

https://www.technologyreview.com/s/612960/an-ai-tool-auto-generates-fake-news-bogus-tweets-and-plenty-of-gibberish/

The researchers set out to develop a general-purpose language algorithm, trained on a vast amount of text from the web, that would be capable of translating text, answering questions, and performing other useful tasks.
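As a toy "language room" of my own (a word-level Markov chain over a made-up corpus; the system in the article is built very differently and at a vastly larger scale), text can look fluent while being produced with zero comprehension:

```python
import random
from collections import defaultdict

# Tiny word-level Markov chain "trained" on a throwaway corpus (made up here).
CORPUS = ("the room shuffles symbols and the room answers questions "
          "the room translates text and the room writes more text").split()

# "Training": count which word follows which.
table = defaultdict(list)
for a, b in zip(CORPUS, CORPUS[1:]):
    table[a].append(b)

def babble(start="the", length=12, seed=42):
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(babble())   # fluent-looking word salad, produced with zero comprehension
```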


I don't suppose some people would give these algorithms rights if they take on the appearance of language "comprehension"?
i am bored

Genes aren't programming code

I've had the displeasure of reading various genes-as-programming-code and genes-as-programming analogies.

If gene expression pokes a gaping hole in those analogies, new scientific findings absolutely obliterate them:

https://www.quantamagazine.org/omnigenic-model-suggests-that-all-genes-affect-every-complex-trait-20180620/

Show me a program in which every segment of every line of code influences absolutely everything. Programs don't work that way, there's no such program, and genes don't work like programs... not even close.

Genes don't spell out a compartmentalized programming language. Genes aren't programming code.

Not only minds are gestalts; biological organisms are as well.

Let's put those abusive and broken analogies out to pasture once and for all.
i am bored

The following is my restatement of Searle's rejoinder to the System Reply to his Chinese Room argument

Let's try this theoretical thought exercise. You memorize a whole bunch of shapes. Then, you memorize the order the shapes are supposed to go in, so that if you see a bunch of shapes in a certain order, you would "answer" by picking a bunch of shapes in another proper order. Now, did you just learn any meaning behind any language? Programs manipulate symbols this way.
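A minimal sketch of that exercise in code (my own illustration, not Searle's; the shape table is invented): the "room" maps incoming shape sequences to outgoing shape sequences by pure lookup, with no access to any meaning behind them.

```python
# Hypothetical rulebook: input shape sequences map to output shape sequences.
# The symbols could stand for Chinese sentences, chess moves, anything at all;
# the lookup works identically either way, which is the point.
RULEBOOK = {
    ("square", "triangle", "circle"): ("circle", "circle", "star"),
    ("star", "square"):               ("triangle",),
}

def room_reply(shapes):
    """Return the memorized response for a shape sequence, or a default shrug."""
    return RULEBOOK.get(tuple(shapes), ("square",))   # default: some fixed shape

print(room_reply(["square", "triangle", "circle"]))   # ('circle', 'circle', 'star')
```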
i am bored

"Machine learning" is a widely abused term

Machines don't actually "learn".

AI textbooks readily admit that the "learning" in "machine learning" isn't referring to learning in the usual sense of the word.

Yet, I've encountered many people, even those in the AI field, who conflate this specialized term with the usual sense of learning.

https://www.cs.swarthmore.edu/~meeden/cs63/f11/ml-intro.pdf

Start reading from the end of page 4. Note that the word "experience" isn't used in the conventional sense, either.


For example, a database system that allows users to update data entries would fit our definition of a learning system: it improves its performance at answering database queries based on the experience gained from database updates. Rather than worry about whether this type of activity falls under the usual informal conversational meaning of the word "learning," we will simply adopt our technical definition of the class of programs that improve through experience.
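To see how broad that technical definition is, here is a minimal sketch of my own (a plain key-value store, not anything from the textbook) that "improves its performance at answering queries based on the experience gained from updates" while clearly not learning in the everyday sense:

```python
# A key-value store that satisfies the technical "improves through experience"
# definition quoted above, without doing anything resembling everyday learning.

class QueryAnswerer:
    def __init__(self):
        self.records = {}            # the "experience" accumulated so far

    def update(self, key, value):    # each update counts as "experience"
        self.records[key] = value

    def answer(self, key):           # the task: answer queries
        return self.records.get(key, "unknown")

db = QueryAnswerer()
print(db.answer("capital of France"))      # "unknown"  (poor performance)
db.update("capital of France", "Paris")    # gain "experience"
print(db.answer("capital of France"))      # "Paris"    (performance improved)
```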


"Experience" isn't just data collection, either. Cue The Knowledge Argument.

https://plato.stanford.edu/entries/qualia-knowledge/#2
Киану Ривз

ANTI-LANGUAGE (BEYOND THE PHILOSOPHY OF LANGUAGE)


A book for a wide circle of the select few. The author, Aleksei Sergeevich Nilogov, holds a candidate's degree in philosophical sciences. The book is devoted to a new philosophical discipline: the philosophy of anti-language. Set theory is used: by the author's definition, anti-language is the totality of the classes and subclasses of anti-words. The philosophy of anti-language develops special methods for discerning entities that cannot be put into language, as well as the most effective ways of naming things. An anti-word is understood as an anti-language unit that cannot, in whole or in part, be expressed in language, that is, cannot be embodied as a full-fledged word in the totality of all its features.



I can send a free PDF file of the book to anyone who wants one...