helios

Teubner uses Latour and Luhmann to elucidate the legal theory of non-human agents

Personification of non-humans is best understood as a strategy of dealing with the uncertainty about the identity of the other, which moves the attribution scheme from causation to double contingency and opens the space for presupposing the others' self-referentiality. But there is no compelling reason to restrict the attribution of action exclusively to humans and to social systems, as Luhmann argues. Personifying other non-humans is a social reality today and a political necessity for the future. The admission of actors does not take place, as Latour suggests, into one and only one collective. Rather, the properties of new actors differ extremely according to the multiplicity of different sites of the political ecology.

- G. Teubner, "Rights of Non-Humans? Electronic Agents and Animals as New Actors in Politics and Law" (link)

This is a fascinating article.

Putting it into the context of the debates that used to rage here, a few things must be said:

- "Continental philosophy" is the primary philosophical reference point in German legal scholarship. Their whole legal system is built on that tradition. Grokking that, which took a great deal of time, really made be grateful to the exposure to continental theory here. Because now the EU, which is largely based on German legal theory, is leading the world in technology law.

- The two theorists listed here are the Great Names in continental philosophical theory as of 10-20 years ago. It's funny that we never talked about them here. In my view, Latour is terrible, and Luhmann is fantastic. But your mileage may vary.

- Latour got famous riding out the Science Wars on a social constructivist platform. This was done under the auspices of social science research, specifically ethnographies of laboratories. It's very poorly positioned philosophically, in my opinion, but nevertheless became wildly popular. My best guess as to why is that it was what mediocre people think smart people sound like. He's changed his tune since, and now he holds so many positions he's hard to track.

- Luhmann is a 'systems theorist' of social science, a student of Talcott Parsons who drew a lot from the second-order cyberneticists Maturana and Varela, who are _amazing_. Luhmann got some recognition for his epic argument with Habermas, who is/was of course the culmination of the Frankfurt School. Luhmann is, in many ways, the cybernetics/pragmatist/engineering mentality as reflected back into high German social theory. And even in Germany, he largely won the argument: German scholars you meet are far more likely to speak of their Luhmann studies than their Habermas studies.

The combination of the two is a bit unholy. But it's a novel approach to a significant practical problem that requires philosophical insight to address: how to deal with all the artificial 'agents' that are not really 'persons' per se.

Food for thought, in case anybody checks this place anymore. Looks like @nanikore is still around...
Current Mood: pedantic
i am bored

This is even better than a Chinese Room.

A general "Language Room".

https://www.technologyreview.com/s/612960/an-ai-tool-auto-generates-fake-news-bogus-tweets-and-plenty-of-gibberish/

The researchers set out to develop a general-purpose language algorithm, trained on a vast amount of text from the web, that would be capable of translating text, answering questions, and performing other useful tasks.
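
For a sense of how far this is from comprehension, here is a minimal sketch of statistical next-word prediction (my toy illustration, not the researchers' actual system, which uses a large neural network): it tallies which word follows which and nothing more, yet it can emit superficially fluent strings.

```python
import random
from collections import defaultdict

# Toy bigram "language model": it records nothing but which word follows
# which, yet it can emit superficially fluent text. Statistics over symbols,
# not comprehension.
def train(corpus):
    followers = defaultdict(list)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        followers[current_word].append(next_word)
    return followers

def generate(followers, seed, length=10):
    word, output = seed, [seed]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break
        word = random.choice(options)  # sampled in proportion to observed frequency
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train(corpus)
print(generate(model, "the"))  # e.g. "the cat sat on the rug and the dog sat"
```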


I don't suppose some people would want to give these algorithms rights once they take on the appearance of language "comprehension"?
i am bored

Genes aren't programming code

I've had the displeasure of reading various genes-as-programming-code and genes-as-programming analogies.

If gene expression pokes a gaping hole in those analogies, new scientific findings absolutely obliterate them:

https://www.quantamagazine.org/omnigenic-model-suggests-that-all-genes-affect-every-complex-trait-20180620/

Show me a program in which every segment of every line of code influences absolutely everything. Programs don't work that way; no such program exists; and genes don't work like programs... not even close.
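
To make the contrast concrete, here is a toy sketch (mine, not from the Quanta article; the gene names and numbers are purely hypothetical). Ordinary code is modular by construction; to even caricature the omnigenic picture you have to wire every input to every output explicitly, which is nothing like how programs are normally written.

```python
# Ordinary code is modular: changing how eye_color is computed cannot touch
# height_cm. (Hypothetical gene names and toy numbers, purely for illustration.)
def eye_color(genome):
    return "brown" if genome.get("OCA2") else "blue"

def height_cm(genome):
    return 150 + 10 * genome.get("GH1", 0)

# Omnigenic-style caricature: every "gene" feeds into every "trait" through a
# dense weight matrix, so no change is ever local. This all-to-all coupling has
# no analogue in ordinary, compartmentalized program structure.
def traits(gene_activities, weights):
    return [sum(w * g for w, g in zip(row, gene_activities)) for row in weights]

gene_activities = [0.2, 1.0, 0.7]      # made-up activity levels
weights = [[0.1, 0.5, 0.3],            # "trait 1" depends on all three genes
           [0.4, 0.2, 0.9]]            # "trait 2" depends on all three genes
print(traits(gene_activities, weights))
```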

Genes don't spell out a compartmentalized programming language. Genes aren't programming code.

Not only minds are gestalts; biological organisms are as well.

Let's put those abusive, broken analogies out to pasture once and for all.
i am bored

The following is my restatement of Searle's rejoinder to the Systems Reply to his Chinese Room argument

Let's try a thought experiment. You memorize a whole bunch of shapes. Then you memorize the order the shapes are supposed to go in, so that if you see a string of shapes in a certain order, you "answer" by picking another string of shapes in the proper order. Now, did you just learn the meaning behind any language? Programs manipulate symbols in exactly this way.
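
As a sketch of what that amounts to in code (my illustration, not Searle's): the "answerer" below maps shape sequences to shape sequences by pure table lookup and never touches meaning.

```python
# Chinese-Room-style symbol shuffling: "shapes" in, "shapes" out, by table
# lookup alone. Nothing here represents what any shape means, and running it
# more often never turns into understanding a language.
RULE_BOOK = {
    ("triangle", "square", "circle"): ("circle", "circle", "triangle"),
    ("square", "square"): ("triangle",),
}

def answer(shapes):
    # Return the memorized response for this exact sequence, else a shrug.
    return RULE_BOOK.get(tuple(shapes), ("?",))

print(answer(["triangle", "square", "circle"]))  # looks like an answer; means nothing
```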
i am bored

"Machine learning" is a widely abused term

Machines don't actually "learn".

AI textbooks readily admit that the "learning" in "machine learning" isn't referring to learning in the usual sense of the word.

Yet, I've encountered many people, even those in the AI field, who conflate this specialized term with the usual sense of learning.

https://www.cs.swarthmore.edu/~meeden/cs63/f11/ml-intro.pdf

Start reading from the end of page 4. Note that the word "experience" isn't used in the conventional sense, either.


For example, a database system that allows users to update data entries would fit our definition of a learning system: it improves its performance at answering database queries based on the experience gained from database updates. Rather than worry about whether this type of activity falls under the usual informal conversational meaning of the word "learning," we will simply adopt our technical definition of the class of programs that improve through experience.
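
Taken literally, the quoted definition is satisfied by something as dumb as the following sketch (mine, not the textbook's): its "performance" at answering queries improves with "experience" in the form of updates, and nothing resembling learning in the ordinary sense ever happens.

```python
# A "learning system" only in the technical sense quoted above: performance
# at answering queries improves with "experience" (updates). No understanding,
# no generalization; it is just storage.
class QueryAnswerer:
    def __init__(self):
        self.facts = {}

    def update(self, key, value):
        """The system's 'experience'."""
        self.facts[key] = value

    def query(self, key):
        """The performance measure: answering queries."""
        return self.facts.get(key, "unknown")

db = QueryAnswerer()
print(db.query("capital of France"))     # "unknown"
db.update("capital of France", "Paris")
print(db.query("capital of France"))     # "Paris": better "performance", no learning
```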


"Experience" isn't just data collection, either. Cue The Knowledge Argument.

https://plato.stanford.edu/entries/qualia-knowledge/#2
Киану Ривз

ANTILANGUAGE (BEYOND THE PHILOSOPHY OF LANGUAGE)




A book for a wide circle of the chosen. The author, Aleksey Sergeyevich Nilogov, holds a candidate's degree in philosophy. The book is devoted to a new philosophical discipline, the philosophy of antilanguage. Set theory is employed: by the author's definition, antilanguage is the totality of the classes and subclasses of anti-words. The philosophy of antilanguage develops special methods for discerning entities that cannot be put into language, as well as the most effective ways of naming things. An anti-word is understood as an antilanguage unit that, in whole or in part, cannot be expressed in language, that is, cannot be embodied as a full-fledged word in the totality of all its features.



I can send a free PDF of the book to anyone who wants it...
i am bored

It is untrue that "science will eventually discover all systematic details and answer all questions"

Underdetermination of Scientific Theory

https://plato.stanford.edu/entries/scientific-underdetermination/

An especially pertinent section is quoted below:

2.1 Holist Underdetermination: The Very Idea

Duhem's original case for holist underdetermination is, perhaps unsurprisingly, intimately bound up with his arguments for confirmational holism: the claim that theories or hypotheses can only be subjected to empirical testing in groups or collections, never in isolation. The idea here is that a single scientific hypothesis does not by itself carry any implications about what we should expect to observe in nature; rather, we can derive empirical consequences from an hypothesis only when it is conjoined with many other beliefs and hypotheses, including background assumptions about the world, beliefs about how measuring instruments operate, further hypotheses about the interactions between objects in the original hypothesis' field of study and the surrounding environment, etc. For this reason, Duhem argues, when an empirical prediction turns out to be falsified, we do not know whether the fault lies with the hypothesis we originally sought to test or with one of the many other beliefs and hypotheses that were also needed and used to generate the failed prediction:

A physicist decides to demonstrate the inaccuracy of a proposition; in order to deduce from this proposition the prediction of a phenomenon and institute the experiment which is to show whether this phenomenon is or is not produced, in order to interpret the results of this experiment and establish that the predicted phenomenon is not produced, he does not confine himself to making use of the proposition in question; he makes use also of a whole group of theories accepted by him as beyond dispute. The prediction of the phenomenon, whose nonproduction is to cut off debate, does not derive from the proposition challenged if taken by itself, but from the proposition at issue joined to that whole group of theories; if the predicted phenomenon is not produced, the only thing the experiment teaches us is that among the propositions used to predict the phenomenon and to establish whether it would be produced, there is at least one error; but where this error lies is just what it does not tell us. ([1914] 1954, 185)

Duhem supports this claim with examples from physical theory, including one designed to illustrate a celebrated further consequence he draws from it. Holist underdetermination ensures, Duhem argues, that there cannot be any such thing as a “crucial experiment”: a single experiment whose outcome is predicted differently by two competing theories and which therefore serves to definitively confirm one and refute the other. For example, in a famous scientific episode intended to resolve the ongoing heated battle between partisans of the theory that light consists of a stream of particles moving at extremely high speed (the particle or “emission” theory of light) and defenders of the view that light consists instead of waves propagated through a mechanical medium (the wave theory), the physicist Foucault designed an apparatus to test the two theories' competing claims about the speed of transmission of light in different media: the particle theory implied that light would travel faster in water than in air, while the wave theory implied that the reverse was true. Although the outcome of the experiment was taken to show that light travels faster in air than in water,[3] Duhem argues that this is far from a refutation of the hypothesis of emission:

in fact, what the experiment declares stained with error is the whole group of propositions accepted by Newton, and after him by Laplace and Biot, that is, the whole theory from which we deduce the relation between the index of refraction and the velocity of light in various media. But in condemning this system as a whole by declaring it stained with error, the experiment does not tell us where the error lies. Is it in the fundamental hypothesis that light consists in projectiles thrown out with great speed by luminous bodies? Is it in some other assumption concerning the actions experienced by light corpuscles due to the media in which they move? We know nothing about that. It would be rash to believe, as Arago seems to have thought, that Foucault's experiment condemns once and for all the very hypothesis of emission, i.e., the assimilation of a ray of light to a swarm of projectiles. If physicists had attached some value to this task, they would undoubtedly have succeeded in founding on this assumption a system of optics that would agree with Foucault's experiment. ([1914] 1954, p. 187)

From this and similar examples, Duhem drew the quite general conclusion that our response to the experimental or observational falsification of a theory is always underdetermined in this way. When the world does not live up to our theory-grounded expectations, we must give up something, but because no hypothesis is ever tested in isolation, no experiment ever tells us precisely which belief it is that we must revise or give up as mistaken:

In sum, the physicist can never subject an isolated hypothesis to experimental test, but only a whole group of hypotheses; when the experiment is in disagreement with his predictions, what he learns is that at least one of the hypotheses constituting this group is unacceptable and ought to be modified; but the experiment does not designate which one should be changed. ([1914] 1954, 187)

The predicament Duhem here identifies is no rainy day puzzle for philosophers of science, but a methodological challenge that constantly arises in the course of scientific practice itself. It is simply not true that for practical purposes and in concrete contexts a single revision of our beliefs in response to disconfirming evidence is always obviously correct, or the most promising, or the only or even most sensible avenue to pursue. To cite a classic example, when Newton's celestial mechanics failed to correctly predict the orbit of Uranus, scientists at the time did not simply abandon the theory but protected it from refutation by instead challenging the background assumption that the solar system contained only seven planets. This strategy bore fruit, notwithstanding the falsity of Newton's theory: by calculating the location of a hypothetical eighth planet influencing the orbit of Uranus, the astronomers Adams and Leverrier were eventually led to discover Neptune in 1846. But the very same strategy failed when used to try to explain the advance of the perihelion in Mercury's orbit by postulating the existence of “Vulcan”, an additional planet located between Mercury and the sun, and this phenomenon would resist satisfactory explanation until the arrival of Einstein's theory of general relativity. So it seems that Duhem was right to suggest not only that hypotheses must be tested as a group or a collection, but also that it is by no means a foregone conclusion which member of such a collection should be abandoned or revised [My note: i.e., where we arrive at the new theory, e.g., Einstein's General Relativity] in response to a failed empirical test or false implication. Indeed, this very example illustrates why Duhem's own rather hopeful appeal to the ‘good sense’ of scientists themselves in deciding when a given hypothesis ought to be abandoned promises very little if any relief from the general predicament of holist underdetermination.

As noted above, Duhem thought that the sort of underdetermination he had described presented a challenge only for theoretical physics, but subsequent thinking in the philosophy of science has tended to the opinion that the predicament Duhem described applies to theoretical testing in all fields of scientific inquiry. We cannot, for example, test an hypothesis about the phenotypic effects of a particular gene without presupposing a host of further beliefs about what genes are, how they work, how we can identify them, what other genes are doing, and so on. And in the middle of the 20th Century, W. V. O. Quine would incorporate confirmational holism and its associated concerns about underdetermination into an extraordinarily influential account of knowledge in general. As part of his famous (1951) critique of the widely accepted distinction between truths that are analytic (true by definition, or as a matter of logic or language alone) and those that are synthetic (true in virtue of some contingent fact about the way the world is), Quine argued instead that all of the beliefs we hold at any given time are linked in an interconnected web, which encounters our sensory experience only at its periphery:

The totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even of pure mathematics and logic, is a man-made fabric which impinges on experience only along the edges. Or, to change the figure, total science is like a field of force whose boundary conditions are experience. A conflict with experience at the periphery occasions readjustments in the interior of the field. But the total field is so underdetermined by its boundary conditions, experience, that there is much latitude of choice as to what statements to reevaluate in the light of any single contrary experience. No particular experiences are linked with any particular statements in the interior of the field, except indirectly through considerations of equilibrium affecting the field as a whole. (1951, 42–3)

One consequence of this general picture of human knowledge is that any and all of our beliefs are tested against experience only as a corporate body—or as Quine sometimes puts it, “The unit of empirical significance is the whole of science” (1951, p. 42).[4] A mismatch between what the web as a whole leads us to expect and the sensory experiences we actually receive will occasion some revision in our beliefs, but which revision we should make to bring the web as a whole back into conformity with our experiences is radically underdetermined by those experiences themselves. If we find our belief that there are brick houses on Elm Street to be in conflict with our immediate sense experience, we might revise our beliefs about the houses on Elm Street, but we might equally well modify instead our beliefs about the appearance of brick, or about our present location, or innumerable other beliefs constituting the interconnected web—in a pinch we might even decide that our present sensory experiences are simply hallucinations! Quine's point was not that any of these are particularly likely responses to recalcitrant experiences (indeed, an important part of his account is the explanation of why they are not), but instead that they would serve equally well to bring the web of belief as a whole in line with our experience. And if the belief that there are brick houses on Elm Street were sufficiently important to us, Quine insisted, it would be possible for us to preserve it “come what may” (in the way of empirical evidence), by making sufficiently radical adjustments elsewhere in the web of belief. It is in principle open to us, Quine argued, to revise even beliefs about logic, mathematics, or the meanings of our terms in response to recalcitrant experience; it might seem a tempting solution to certain persistent difficulties in quantum mechanics, for example, to reject classical logic's law of the excluded middle (allowing physical particles to both have and not have some determinate classical physical property like position or momentum at a given time). The only test of a belief, Quine argued, is whether it fits into a web of connected beliefs that accords well with our experience on the whole. And because this leaves any and all beliefs in that web at least potentially subject to revision on the basis of our ongoing sense experience or empirical evidence, he insisted, there simply are no beliefs that are analytic in the originally supposed sense of immune to revision in light of experience or true no matter what the world is like.

Quine recognized, of course, that many of the logically possible ways of revising our beliefs in response to recalcitrant experiences that remain open to us strike us as ad hoc, perfectly ridiculous, or worse. He argues (1955) that our actual revisions of the web of belief seek to maximize the theoretical “virtues” of simplicity, familiarity, scope, and fecundity, along with conformity to experience, and elsewhere suggests that we typically seek to resolve conflicts between the web of our beliefs and our sensory experiences in accordance with a principle of “conservatism”, that is, by making the smallest possible number of changes to the least central beliefs we can that will suffice to reconcile the web with experience. That is, Quine recognized that when we encounter recalcitrant experience we are not usually at a loss to decide which of our beliefs to revise in response to it, but he claimed that this is simply because we are strongly disposed as a matter of fundamental psychology to prefer whatever revision requires the most minimal mutilation of the existing web of beliefs and/or maximizes virtues that he explicitly characterizes as pragmatic. [My note: Every single adjustment is thus BIASED toward what we mentally deem expedient to the solution] Indeed, it would seem that on Quine's view the very notion of a belief being more central or peripheral or in lesser or greater “proximity” to sense experience should be cashed out simply as a measure of our willingness to revise it in response to recalcitrant experience. That is, it would seem that what it means for one belief to be located “closer” to the sensory periphery of the web than another is simply that we are more likely to revise the first than the second if doing so would enable us to bring the web as a whole into conformity with otherwise recalcitrant sense experience. Thus, Quine saw the traditional distinction between analytic and synthetic beliefs as simply registering the endpoints of a psychological continuum ordering our beliefs according to the ease and likelihood with which we are prepared to revise them in order to reconcile the web as a whole with our sense experience.
беседа

Katehon-TV: "The Influence of the Liberal Intelligentsia on the Revolution of 1917"

Originally posted by arkadiy_maler in Katehon-TV: "The Influence of the Liberal Intelligentsia on the Revolution of 1917"


The Katehon-TV program features Fyodor Aleksandrovich Gaida: historian, publicist, candidate of historical sciences, and associate professor at the Faculty of History of Moscow State University (MGU).

The topic is "The Influence of the Liberal Intelligentsia on the Revolution of 1917":

- liberals and the left: differences and similarities
- radical and conservative liberalism
- from Herzen and Chicherin to Milyukov and Struve
- the Octobrists, the Kadets, the SRs, and the SDs
- the Vekhi contributors and the February Revolution
i am bored

Agnostic instrumentalism toward metaphysical models

No proof can be offered for any particular metaphysical model by any means. It's a matter of practical efficacy, not metaphysical truth. Neither paucity nor abundance of imagination points to such truth, i.e. "I can / can't imagine it being any other way, therefore it is true / false."

Metaphysical models are not for obtaining the actual state of affairs of the metaphysical world. They are tools. The only thing that can be done is to ascertain how useful a metaphysical model is by gauging its practical utility: thinking and acting as if the model were true.
i am bored

The pseudoscience of memetics

Memetics is a sham model. It does not reflect reality. Memetics assumes boundaries around memes where there are none. There is no real boundary, because how each supposed meme is "seen" by each mind is subjective in nature. Pseudoscience. "Memeticists" can pump out all kinds of numbers based on this sham model.