
Asszociációk


From the Organon to the computer

2017.04.12. 17:52 Italo Romano

(Presented by Károly Földes)

Excerpts

Chris Dixon

https://www.theatlantic.com/technology/archive/2017/03/aristotle-computer/518697/?utm_source=msn

 

THE HISTORY Of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.

Mathematical logic was initially considered a hopelessly abstract subject with no conceivable applications. As one computer scientist commented: “If, in 1901, a talented and sympathetic outsider had been called upon to survey the sciences and name the branch which would be least fruitful in [the] century ahead, his choice might well have settled upon mathematical logic.” And yet, it would provide the foundation for a field that would have more impact on the modern world than any other.

The evolution of computer science from mathematical logic culminated in the 1930s, with two landmark papers: Claude Shannon’s “A Symbolic Analysis of Switching and Relay Circuits,” and Alan Turing’s “On Computable Numbers, With an Application to the Entscheidungsproblem.” In the history of computer science, Shannon and Turing are towering figures, but the importance of the philosophers and logicians who preceded them is frequently overlooked.

A well-known history of computer science describes Shannon’s paper as “possibly the most important, and also the most noted, master’s thesis of the century.” Shannon wrote it as an electrical engineering student at MIT. His adviser, Vannevar Bush, built a prototype computer known as the Differential Analyzer that could rapidly calculate differential equations. The device was mostly mechanical, with subsystems controlled by electrical relays, which were organized in an ad hoc manner as there was not yet a systematic theory underlying circuit design. Shannon’s thesis topic came about when Bush recommended he try to discover such a theory.


Shannon’s paper is in many ways a typical electrical-engineering paper, filled with equations and diagrams of electrical circuits. What is unusual is that the primary reference was a 90-year-old work of mathematical philosophy, George Boole’s The Laws of Thought.

Today, Boole’s name is well known to computer scientists (many programming languages have a basic data type called a Boolean), but in 1938 he was rarely read outside of philosophy departments. Shannon himself encountered Boole’s work in an undergraduate philosophy class. “It just happened that no one else was familiar with both fields at the same time,” he commented later.

Boole is often described as a mathematician, but he saw himself as a philosopher, following in the footsteps of Aristotle. The Laws of Thought begins with a description of his goals, to investigate the fundamental laws of the operation of the human mind:

The design of the following treatise is to investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolical language of a Calculus, and upon this foundation to establish the science of Logic … and, finally, to collect … some probable intimations concerning the nature and constitution of the human mind.

He then pays tribute to Aristotle, the inventor of logic, and the primary influence on his own work:

In its ancient and scholastic form, indeed, the subject of Logic stands almost exclusively associated with the great name of Aristotle. As it was presented to ancient Greece in the partly technical, partly metaphysical disquisitions of The Organon, such, with scarcely any essential change, it has continued to the present day.

Trying to improve on the logical work of Aristotle was an intellectually daring move. Aristotle’s logic, presented in his six-part book The Organon, occupied a central place in the scholarly canon for more than 2,000 years. It was widely believed that Aristotle had written almost all there was to say on the topic. The great philosopher Immanuel Kant commented that, since Aristotle, logic had been “unable to take a single step forward, and therefore seems to all appearance to be finished and complete.”

Aristotle’s central observation was that arguments were valid or not based on their logical structure, independent of the non-logical words involved. The most famous argument schema he discussed is known as the syllogism:

  • All men are mortal.
  • Socrates is a man.
  • Therefore, Socrates is mortal.

You can replace “Socrates” with any other object, and “mortal” with any other predicate, and the argument remains valid. The validity of the argument is determined solely by the logical structure. The logical words — “all,” “is,” “are,” and “therefore” — are doing all the work.
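
A minimal sketch of this structural point in Python (the sets and predicate below are illustrative assumptions, not from the article): whatever terms we substitute, the conclusion follows whenever the premises hold.

```python
# Aristotle's point: the syllogism's validity depends only on its structure,
# not on which objects or predicates are plugged into it.
def syllogism(everything, predicate, obj):
    """If every member of `everything` satisfies `predicate`, and `obj` is a
    member of `everything`, conclude predicate(obj)."""
    premises_hold = all(predicate(x) for x in everything) and obj in everything
    if premises_hold:
        return predicate(obj)   # guaranteed True by the structure alone
    return None                 # premises not met; the schema says nothing

# Illustrative terms (assumed for the example):
men = {"Socrates", "Plato"}
mortal = lambda x: x in {"Socrates", "Plato", "Fido"}

print(syllogism(men, mortal, "Socrates"))  # True, for any such substitution
```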

Aristotle also defined a set of basic axioms from which he derived the rest of his logical system:

  • An object is what it is (Law of Identity)
  • No statement can be both true and false (Law of Non-contradiction)
  • Every statement is either true or false (Law of the Excluded Middle)

These axioms weren’t meant to describe how people actually think (that would be the realm of psychology), but how an idealized, perfectly rational person ought to think.
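
Once statements are reduced to two truth values, as Boole's algebra later made explicit, the three laws can be checked mechanically; a tiny illustrative sketch:

```python
# Checking Aristotle's three laws over the two truth values.
for p in (True, False):
    assert p == p              # Law of Identity
    assert not (p and not p)   # Law of Non-contradiction
    assert p or not p          # Law of the Excluded Middle
print("All three laws hold for both truth values.")
```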

Aristotle’s axiomatic method influenced an even more famous book, Euclid’s Elements, which is estimated to be second only to the Bible in the number of editions printed.

Although ostensibly about geometry, the Elements became a standard textbook for teaching rigorous deductive reasoning. (Abraham Lincoln once said that he learned sound legal argumentation from studying Euclid.) In Euclid’s system, geometric ideas were represented as spatial diagrams. Geometry continued to be practiced this way until René Descartes, in the 1630s, showed that geometry could instead be represented as formulas. His Discourse on Method was the first mathematics text in the West to popularize what is now standard algebraic notation — x, y, z for variables, a, b, c for known quantities, and so on.

Descartes’s algebra allowed mathematicians to move beyond spatial intuitions to manipulate symbols using precisely defined formal rules. This shifted the dominant mode of mathematics from diagrams to formulas, leading to, among other things, the development of calculus, invented roughly 30 years after Descartes by, independently, Isaac Newton and Gottfried Leibniz.

Boole’s goal was to do for Aristotelean logic what Descartes had done for Euclidean geometry: free it from the limits of human intuition by giving it a precise algebraic notation. To give a simple example, when Aristotle wrote:

All men are mortal.

Boole replaced the words “men” and “mortal” with variables, and the logical words “all” and “are” with arithmetical operators:

x = x * y

Which could be interpreted as “Everything in the set x is also in the set y.”
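
Read over classes, Boole's multiplication is intersection, so the equation holds exactly when x is contained in y. A minimal check in Python (the concrete sets are assumptions for illustration):

```python
# Boole's "x = x * y": multiplication read as intersection of classes.
# The equation holds exactly when every member of x is also a member of y.
men    = {"Socrates", "Plato"}          # illustrative set for x
mortal = {"Socrates", "Plato", "Fido"}  # illustrative set for y

x, y = men, mortal
print(x == (x & y))   # True: "All men are mortal"
print(x <= y)         # the equivalent subset statement
```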

The Laws of Thought created a new scholarly field—mathematical logic—which in the following years became one of the most active areas of research for mathematicians and philosophers. Bertrand Russell called the Laws of Thought “the work in which pure mathematics was discovered.”

Shannon’s insight was that Boole’s system could be mapped directly onto electrical circuits. At the time, electrical circuits had no systematic theory governing their design. Shannon realized that the right theory would be “exactly analogous to the calculus of propositions used in the symbolic study of logic.”

He showed the correspondence between electrical circuits and Boolean operations in a simple chart:

[Figure: Shannon’s mapping from electrical circuits to symbolic logic (University of Virginia)]

This correspondence allowed computer scientists to import decades of work in logic and mathematics by Boole and subsequent logicians. In the second half of his paper, Shannon showed how Boolean logic could be used to create a circuit for adding two binary digits.

[Figure: Shannon’s adder circuit (University of Virginia)]

By stringing these adder circuits together, arbitrarily complex arithmetical operations could be constructed. These circuits would become the basic building blocks of what are now known as arithmetical logic units, a key component in modern computers.
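
A sketch of the kind of building block being described, written with Boolean operators instead of relays (a standard one-bit full adder chained into a multi-bit adder; an illustration in Python, not Shannon's own circuit):

```python
# A one-bit full adder expressed in Boolean logic, the building block
# Shannon showed could be realized with relay circuits.
def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in                        # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))  # carry bit
    return s, carry_out

def add_bits(x_bits, y_bits):
    """Chain full adders to add two equal-length binary numbers,
    given least-significant bit first."""
    carry, result = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

print(add_bits([1, 1], [1, 0]))  # 3 + 1 -> [0, 0, 1], i.e. binary 100 = 4
```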

Another way to characterize Shannon’s achievement is that he was first to distinguish between the logical and the physical layer of computers. (This distinction has become so fundamental to computer science that it might seem surprising to modern readers how insightful it was at the time—a reminder of the adage that “the philosophy of one century is the common sense of the next.”)

Since Shannon’s paper, a vast amount of progress has been made on the physical layer of computers, including the invention of the transistor in 1947 by William Shockley and his colleagues at Bell Labs. Transistors are dramatically improved versions of Shannon’s electrical relays — the best known way to physically encode Boolean operations. Over the next 70 years, the semiconductor industry packed more and more transistors into smaller spaces. A 2016 iPhone has about 3.3 billion transistors, each one a “relay switch” like those pictured in Shannon’s diagrams.

 

While Shannon showed how to map logic onto the physical world, Turing showed how to design computers in the language of mathematical logic. When Turing wrote his paper, in 1936, he was trying to solve “the decision problem,” first identified by the mathematician David Hilbert, who asked whether there was an algorithm that could determine whether an arbitrary mathematical statement is true or false. In contrast to Shannon’s paper, Turing’s paper is highly technical. Its primary historical significance lies not in its answer to the decision problem, but in the template for computer design it provided along the way.

Turing was working in a tradition stretching back to Gottfried Leibniz, the philosophical giant who developed calculus independently of Newton. Among Leibniz’s many contributions to modern thought, one of the most intriguing was the idea of a new language he called the “universal characteristic” that, he imagined, could represent all possible mathematical and scientific knowledge. Inspired in part by the 13th-century religious philosopher Ramon Llull, Leibniz postulated that the language would be ideographic like Egyptian hieroglyphics, except characters would correspond to “atomic” concepts of math and science. He argued this language would give humankind an “instrument” that could enhance human reason “to a far greater extent than optical instruments” like the microscope and telescope.

He also imagined a machine that could process the language, which he called the calculus ratiocinator.

If controversies were to arise, there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in their hands, and say to each other: Calculemus—Let us calculate.

Leibniz didn’t get the opportunity to develop his universal language or the corresponding machine (although he did invent a relatively simple calculating machine, the stepped reckoner). The first credible attempt to realize Leibniz’s dream came in 1879, when the German philosopher Gottlob Frege published his landmark logic treatise Begriffsschrift. Inspired by Boole’s attempt to improve Aristotle’s logic, Frege developed a much more advanced logical system. The logic taught in philosophy and computer-science classes today—first-order or predicate logic—is only a slight modification of Frege’s system.

Frege is generally considered one of the most important philosophers of the 19th century. Among other things, he is credited with catalyzing what noted philosopher Richard Rorty called the “linguistic turn” in philosophy. As Enlightenment philosophy was obsessed with questions of knowledge, philosophy after Frege became obsessed with questions of language. His disciples included two of the most important philosophers of the 20th century—Bertrand Russell and Ludwig Wittgenstein.

The major innovation of Frege’s logic is that it much more accurately represented the logical structure of ordinary language. Among other things, Frege was the first to use quantifiers (“for every,” “there exists”) and to separate objects from predicates. He was also the first to develop what today are fundamental concepts in computer science like recursive functions and variables with scope and binding.
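
Those Fregean ingredients map directly onto constructs programmers now take for granted; a brief, purely illustrative sketch in Python:

```python
# Frege's innovations restated in programming terms (illustrative only).
domain = range(1, 6)

# Quantifiers: "for every" and "there exists" over a domain.
print(all(x > 0 for x in domain))       # for every x, x > 0
print(any(x % 2 == 0 for x in domain))  # there exists an even x

# A recursive function whose parameter n is a bound variable,
# scoped to the definition in which it appears.
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

print(factorial(5))  # 120
```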

Frege’s formal language — what he called his “concept-script” — is made up of meaningless symbols that are manipulated by well-defined rules. The language is only given meaning by an interpretation, which is specified separately (this distinction would later come to be called syntax versus semantics). This turned logic into what the eminent computer scientists Allen Newell and Herbert Simon called “the symbol game,” “played with meaningless tokens according to certain purely syntactic rules.”

All meaning had been purged. One had a mechanical system about which various things could be proved. Thus progress was first made by walking away from all that seemed relevant to meaning and human symbols.

As Bertrand Russell famously quipped: “Mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.”
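
The syntax/semantics split can be made concrete with a small sketch: a formula is just a structure of symbols until an interpretation assigns truth values to its atoms (the encoding below is my own, purely illustrative):

```python
# Syntax: a formula as a nested structure of uninterpreted symbols.
# Semantics: an interpretation maps atomic symbols to truth values.
formula = ("implies", ("and", "p", "q"), "p")   # (p AND q) -> p

def evaluate(f, interpretation):
    if isinstance(f, str):                       # atomic symbol
        return interpretation[f]
    op, *args = f
    vals = [evaluate(a, interpretation) for a in args]
    if op == "and":     return vals[0] and vals[1]
    if op == "or":      return vals[0] or vals[1]
    if op == "not":     return not vals[0]
    if op == "implies": return (not vals[0]) or vals[1]

# The same syntax under two different interpretations:
print(evaluate(formula, {"p": True,  "q": False}))  # True
print(evaluate(formula, {"p": False, "q": True}))   # True
```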

An unexpected consequence of Frege’s work was the discovery of weaknesses in the foundations of mathematics. For example, Euclid’s Elements — considered the gold standard of logical rigor for thousands of years — turned out to be full of logical mistakes. Because Euclid used ordinary words like “line” and “point,” he — and centuries of readers — deceived themselves into making assumptions about sentences that contained those words. To give one relatively simple example, in ordinary usage, the word “line” implies that if you are given three distinct points on a line, one point must be between the other two. But when you define “line” using formal logic, it turns out “between-ness” also needs to be defined—something Euclid overlooked. Formal logic makes gaps like this easy to spot.
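
As a toy illustration of what filling such a gap looks like, once the line is given coordinates the missing relation can be written down explicitly (a simple assumed model, not Hilbert's axiomatization):

```python
# A formal stand-in for the "between-ness" Euclid left implicit:
# on a coordinatized line, b lies between a and c.
def between(a, b, c):
    return a < b < c or c < b < a

print(between(0.0, 1.0, 2.0))  # True
print(between(0.0, 3.0, 2.0))  # False
```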

This realization created a crisis in the foundation of mathematics. If the Elements — the bible of mathematics — contained logical mistakes, what other fields of mathematics did too? What about sciences like physics that were built on top of mathematics?

The good news is that the same logical methods used to uncover these errors could also be used to correct them. Mathematicians started rebuilding the foundations of mathematics from the bottom up. In 1889, Giuseppe Peano developed axioms for arithmetic, and in 1899, David Hilbert did the same for geometry. Hilbert also outlined a program to formalize the remainder of mathematics, with specific requirements that any such attempt should satisfy, including:

  • Completeness: There should be a proof that all true mathematical statements can be proved in the formal system.
  • Decidability: There should be an algorithm for deciding the truth or falsity of any mathematical statement. (This is the “Entscheidungsproblem” or “decision problem” referenced in Turing’s paper.)

Rebuilding mathematics in a way that satisfied these requirements became known as Hilbert’s program. Up through the 1930s, this was the focus of a core group of logicians including Hilbert, Russell, Kurt Gödel, John Von Neumann, Alonzo Church, and, of course, Alan Turing.


Hilbert’s program proceeded on at least two fronts. On the first front, logicians created logical systems that tried to prove Hilbert’s requirements either satisfiable or not.

On the second front, mathematicians used logical concepts to rebuild classical mathematics. For example, Peano’s system for arithmetic starts with a simple function called the successor function which increases any number by one. He uses the successor function to recursively define addition, uses addition to recursively define multiplication, and so on, until all the operations of number theory are defined. He then uses those definitions, along with formal logic, to prove theorems about arithmetic.
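
A compact sketch of Peano's construction (ordinary Python integers stand in for Peano numerals; only the successor function, recursion, and the base cases are used):

```python
# Peano-style arithmetic: everything built recursively from successor.
def successor(n):
    return n + 1

def add(m, n):
    # m + 0 = m;  m + S(n) = S(m + n)
    return m if n == 0 else successor(add(m, n - 1))

def multiply(m, n):
    # m * 0 = 0;  m * S(n) = (m * n) + m
    return 0 if n == 0 else add(multiply(m, n - 1), m)

print(add(2, 3))       # 5
print(multiply(4, 3))  # 12
```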

The historian Thomas Kuhn once observed that “in science, novelty emerges only with difficulty.” Logic in the era of Hilbert’s program was a tumultuous process of creation and destruction. One logician would build up an elaborate system and another would tear it down.

The favored tool of destruction was the construction of self-referential, paradoxical statements that showed the axioms from which they were derived to be inconsistent. A simple form of this “liar’s paradox” is the sentence:

This sentence is false.

If it is true then it is false, and if it is false then it is true, leading to an endless loop of self-contradiction.

Russell made the first notable use of the liar’s paradox in mathematical logic. He showed that Frege’s system allowed self-contradicting sets to be derived:

Let R be the set of all sets that are not members of themselves. If R is not a member of itself, then its definition dictates that it must contain itself, and if it contains itself, then it contradicts its own definition as the set of all sets that are not members of themselves.

This became known as Russell’s paradox and was seen as a serious flaw in Frege’s achievement. (Frege himself was shocked by this discovery. He replied to Russell: “Your discovery of the contradiction caused me the greatest surprise and, I would almost say, consternation, since it has shaken the basis on which I intended to build my arithmetic.”)
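
The self-application at the heart of the paradox can be mimicked loosely with functions standing in for sets (an informal analogy, not Frege's or Russell's formal systems): the "predicate of all predicates that do not apply to themselves," applied to itself, never settles on an answer.

```python
# A loose computational analogue of Russell's paradox.
import sys
sys.setrecursionlimit(100)   # keep the inevitable failure small

R = lambda predicate: not predicate(predicate)

try:
    R(R)
except RecursionError:
    print("Self-application never terminates: the analogue of the paradox.")
```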

Russell and his colleague Alfred North Whitehead put forth the most ambitious attempt to complete Hilbert’s program with the Principia Mathematica, published in three volumes between 1910 and 1913. The Principia’s method was so detailed that it took over 300 pages to get to the proof that 1+1=2.

Russell and Whitehead tried to resolve Frege’s paradox by introducing what they called type theory. The idea was to partition formal languages into multiple levels or types. Each level could make reference to levels below, but not to their own or higher levels. This resolved self-referential paradoxes by, in effect, banning self-reference. (This solution was not popular with logicians, but it did influence computer science — most modern computer languages have features inspired by type theory.)

Self-referential paradoxes ultimately showed that Hilbert’s program could never be successful. The first blow came in 1931, when Gödel published his now famous incompleteness theorem, which proved that any consistent logical system powerful enough to encompass arithmetic must also contain statements that are true but cannot be proven to be true. (Gödel’s incompleteness theorem is one of the few logical results that has been broadly popularized, thanks to books like Gödel, Escher, Bach and The Emperor’s New Mind).

The final blow came when Turing and Alonzo Church independently proved that no algorithm could exist that determined whether an arbitrary mathematical statement was true or false. (Church did this by inventing an entirely different system called the lambda calculus, which would later inspire computer languages like Lisp.) The answer to the decision problem was negative.
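
A taste of Church's lambda calculus, using Python lambdas for the standard textbook encoding of Booleans as functions (an illustration of the style, not of Church's proof):

```python
# Church's lambda calculus: data encoded purely as functions.
TRUE  = lambda x: lambda y: x
FALSE = lambda x: lambda y: y
AND   = lambda p: lambda q: p(q)(p)
NOT   = lambda p: lambda x: lambda y: p(y)(x)

def to_bool(church):           # decode back to a Python bool for display
    return church(True)(False)

print(to_bool(AND(TRUE)(FALSE)))   # False
print(to_bool(NOT(FALSE)))         # True
```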

Turing’s key insight came in the first section of his famous 1936 paper, “On Computable Numbers, With an Application to the Entscheidungsproblem.” In order to rigorously formulate the decision problem (the “Entscheidungsproblem”), Turing first created a mathematical model of what it means to be a computer (today, machines that fit this model are known as “universal Turing machines”). As the logician Martin Davis describes it:

Turing knew that an algorithm is typically specified by a list of rules that a person can follow in a precise mechanical manner, like a recipe in a cookbook. He was able to show that such a person could be limited to a few extremely simple basic actions without changing the final outcome of the computation.

Then, by proving that no machine performing only those basic actions could determine whether or not a given proposed conclusion follows from given premises using Frege’s rules, he was able to conclude that no algorithm for the Entscheidungsproblem exists.

As a byproduct, he found a mathematical model of an all-purpose computing machine.
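
A bare-bones simulator makes Davis's description concrete: a finite table of rules reads and writes symbols on a tape, one cell at a time (the particular rule table below is an illustrative toy, not one of Turing's machines):

```python
# A minimal Turing machine: finite rules, one tape, one head.
# rules[(state, symbol)] = (symbol_to_write, head_move, next_state)
def run(rules, tape, state="start", halt="halt", blank="_"):
    cells, head = dict(enumerate(tape)), 0
    while state != halt:
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    low, high = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(low, high + 1))

# Toy machine: append one '1' to a unary number, then halt.
rules = {
    ("start", "1"): ("1", "R", "start"),  # skip over existing 1s
    ("start", "_"): ("1", "R", "halt"),   # write a trailing 1 and stop
}
print(run(rules, "111"))  # '1111'
```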

Next, Turing showed how a program could be stored inside a computer alongside the data upon which it operates. In today’s vocabulary, we’d say that he invented the “stored-program” architecture that underlies most modern computers:

Before Turing, the general supposition was that in dealing with such machines the three categories — machine, program, and data — were entirely separate entities. The machine was a physical object; today we would call it hardware. The program was the plan for doing a computation, perhaps embodied in punched cards or connections of cables in a plugboard. Finally, the data was the numerical input. Turing’s universal machine showed that the distinctness of these three categories is an illusion.

This was the first rigorous demonstration that any computing logic that could be encoded in hardware could also be encoded in software. The architecture Turing described was later dubbed the “Von Neumann architecture” — but modern historians generally agree it came from Turing, as, apparently, did Von Neumann himself.
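
A toy sketch of the stored-program idea just described: instructions and data occupy one memory, and a cell counts as "program" or "data" only according to how the machine reaches it (the instruction names below are my own invention, not the EDVAC's or ACE's order code).

```python
# A toy stored-program machine: program and data share one memory.
# Each instruction is (opcode, operand); data cells are plain integers.
memory = [
    ("LOAD", 5),    # 0: accumulator <- memory[5]
    ("ADD", 6),     # 1: accumulator += memory[6]
    ("STORE", 7),   # 2: memory[7] <- accumulator
    ("HALT", 0),    # 3: stop
    0,              # 4: unused
    2,              # 5: data
    3,              # 6: data
    0,              # 7: result goes here
]

acc, pc = 0, 0
while True:
    op, arg = memory[pc]
    pc += 1
    if op == "LOAD":    acc = memory[arg]
    elif op == "ADD":   acc += memory[arg]
    elif op == "STORE": memory[arg] = acc
    elif op == "HALT":  break

print(memory[7])  # 5 -- the program manipulated the memory it also lives in
```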

Although, on a technical level, Hilbert’s program was a failure, the efforts along the way demonstrated that large swaths of mathematics could be constructed from logic. And after Shannon and Turing’s insights—showing the connections between electronics, logic and computing—it was now possible to export this new conceptual machinery over to computer design.

During World War II, this theoretical work was put into practice, when government labs conscripted a number of elite logicians. Von Neumann joined the atomic bomb project at Los Alamos, where he worked on computer design to support physics research. In 1945, he wrote the specification of the EDVAC—the first stored-program, logic-based computer—which is generally considered the definitive source guide for modern computer design.

Turing joined a secret unit at Bletchley Park, northwest of London, where he helped design computers that were instrumental in breaking German codes. His most enduring contribution to practical computer design was his specification of the ACE, or Automatic Computing Engine.

As the first computers to be based on Boolean logic and stored-program architectures, the ACE and the EDVAC were similar in many ways. But they also had interesting differences, some of which foreshadowed modern debates in computer design. Von Neumann’s favored designs were similar to modern CISC (“complex”) processors, baking rich functionality into hardware. Turing’s design was more like modern RISC (“reduced”) processors, minimizing hardware complexity and pushing more work to software.

Von Neumann thought computer programming would be a tedious, clerical job. Turing, by contrast, said computer programming “should be very fascinating. There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over to the machine itself.”

Since the 1940s, computer programming has become significantly more sophisticated. One thing that hasn’t changed is that it still primarily consists of programmers specifying rules for computers to follow. In philosophical terms, we’d say that computer programming has followed in the tradition of deductive logic, the branch of logic discussed above, which deals with the manipulation of symbols according to formal rules.

In the past decade or so, programming has started to change with the growing popularity of machine learning, which involves creating frameworks for machines to learn via statistical inference. This has brought programming closer to the other main branch of logic, inductive logic, which deals with inferring rules from specific instances.

Today’s most promising machine learning techniques use neural networks, which were first invented in the 1940s by Warren McCulloch and Walter Pitts, whose idea was to develop a calculus for neurons that could, like Boolean logic, be used to construct computer circuits. Neural networks remained esoteric until decades later when they were combined with statistical techniques, which allowed them to improve as they were fed more data. Recently, as computers have become increasingly adept at handling large data sets, these techniques have produced remarkable results. Programming in the future will likely mean exposing neural networks to the world and letting them learn.
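
The original McCulloch-Pitts unit is close in spirit to Shannon's relays: a cell "fires" when its weighted inputs reach a threshold, and such units can reproduce Boolean gates (the weights and thresholds below are standard textbook choices, used here purely as an illustration):

```python
# A McCulloch-Pitts style threshold unit reproducing Boolean gates.
def neuron(inputs, weights, threshold):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

AND = lambda a, b: neuron((a, b), (1, 1), 2)
OR  = lambda a, b: neuron((a, b), (1, 1), 1)
NOT = lambda a:    neuron((a,),   (-1,),  0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT:", NOT(0), NOT(1))  # 1 0
```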

This would be a fitting second act to the story of computers. Logic began as a way to understand the laws of thought. It then helped create machines that could reason according to the rules of deductive logic. Today, deductive and inductive logic are being combined to create machines that both reason and learn. What began, in Boole’s words, with an investigation “concerning the nature and constitution of the human mind,” could result in the creation of new minds—artificial minds—that might someday match or even exceed our own.


Tags: computer circuits

Why economists can't forecast

2017.03.09. 11:38 Italo Romano

Washington Post

By Robert J. Samuelson March 8 at 12:22 PM

You knew it all along: Economists can’t forecast the economy worth a hoot. And now we have a scholarly study that confirms it. Better yet, the corroboration comes from an impeccable source: the Federal Reserve.

The study compared predictions of important economic indicators — unemployment, inflation, interest rates, gross domestic product — with the actual outcomes. There were widespread errors. The study concluded that “considerable uncertainty surrounds all macroeconomic projections.”

Just how large were the mistakes? The report, though written mostly in technical jargon, gives a straightforward example:

“Suppose . . . the unemployment rate was projected to remain near 5 percent over the next few years, accompanied by 2 percent inflation. Given the size of past errors, we should not be surprised to see the unemployment rate climb to 7 percent or fall to 3 percent. . . . Similarly, it would not be at all surprising to see inflation as high as 3 percent or as low as 1 percent.”

These are huge margins of error. Clearly, much economic forecasting is guesswork. Worse, the gap between prediction and reality may be widening. The study — done by David Reifschneider of the Federal Reserve and Peter Tulip of the Reserve Bank of Australia — found that forecasting mistakes had worsened since the 2008-09 financial crisis.

An interesting question (which the study did not ask) is whether economic forecasting has improved in the past century. In the 1920s, with no computers, forecasters relied on random statistics: freight car loadings; grain harvests and prices; bank deposits. Today, forecasters employ elaborate computer models that scan dozens of statistical series describing the economy. Yet the predictions seem no better.

The implications are profound. If forecasts are inevitably flawed, there are bound to be recessions. Government officials — not only at the Federal Reserve but also in Congress and the White House — are condemned to make mistakes. Their vision of the future is blurred; therefore, their policies may blunder. The same is true of the private sector: Consumers and companies, misreading the future, may act to bring the economy down. They may borrow too much or spend too little.

The study compared forecasts from 1996 to 2015 not only from the Fed but also from the Congressional Budget Office, the Blue Chip Economic Indicators and the Survey of Professional Forecasters — these last two representing mainly private economists. Crowd behavior dominated; forecasts bunched together. “Differences in accuracy across forecasters are small,” write Reifschneider and Tulip. Naturally, the further forecasts probed the future, the worse their reliability.

The bedrock lesson here is as old as time: The future — not all of it, but much of it — is too complex to be predicted. There are too many moving parts; too much is unknown; people wrongly think the future will resemble the past; they don’t foresee political, economic and technological change.

Recent upsets to the economy confirm this ignorance. The biggest was the financial crisis and the Great Recession. But there were others: the fall of long-term interest rates on bonds and mortgages; the unexpectedly slow growth of the economic recovery accompanied by an unexpectedly rapid (and inconsistent) decline in unemployment; and the collapse of productivity.

Almost none of this was anticipated. It has been said that the past is a foreign country. So is the future.

 


Tags: forcast

Parmenides, Aristotle, being, truth - Heidegger

2017.03.03. 10:42 Italo Romano

[Image: file-page108.jpg]



Contents of Care - Heidegger

2017.03.01. 17:08 Italo Romano

[Image: heid100.JPG]



Tags: being ahead

Care shaped us - Heidegger

2017.02.28. 08:28 Italo Romano

[Image: heid102.JPG]


How a bank conquered Washington

2017.01.30. 11:32 Italo Romano

The blog is written by Károly Földes

 

http://www.huffingtonpost.com/entry/the-goldman-sachs-effect_us_588eabf7e4b0b065cbbcd9d9?c9v1xeh0c69smunmi&


Britannica online on deconstruction

2017.01.29. 21:29 Italo Romano

The blog is written by Károly Földes

 

Part One:

"Deconstruction:

school or movement of literary criticism initiated by

Jacques Derrida in France, who, in a series of books

published beginning in the late 1960s, launched a major

critique of traditional Western metaphysics.

Deconstruction may be thought of as an examination of

methodology. Like Sigmund Freud's psychological

theories and Karl Marx's political theories, Derrida's

deconstructive strategies, which take off from

Ferdinand de Saussure's insistence on the arbitrariness

of the verbal sign, have subsequently established

themselves as an important part of postmodernism,

especially in poststructural literary theory and text

analysis. Though the deconstructive principles of

Derrida and later critics are well established, they

remain somewhat controversial.

The deconstruction of philosophy involves the

questioning of the many hierarchical oppositions--such

as cause and effect, presence and absence, speech

("phonocentrism") and writing--in order to expose the

bias (the privileged terms) of those tacit assumptions

on which Western metaphysics rest. Deconstruction takes

apart the logic of language in which authors make their

claims, a process that reveals how all texts undermine

themselves because every text includes unconscious

"traces" of other positions exactly opposite to that

which it sets out to uphold. Deconstruction undermines

"logocentrism" (literally, a focus on the word, the

original and originating word in relation to which

other concepts such as truth, identity, and certainty

can be validated; but understood more generally as a

belief in reason and rationality, the belief that

meaning inheres in the world independently of any human

attempt to represent it in words). It follows from this

view that the "meaning" of a text bears only accidental

relationship to the author's conscious intentions. One

of the effects of deconstructive criticism has been a

loosening of language from concepts and referents.".

To cite this page: "deconstruction" Britannica Online. <http://www.eb.com:180/cgi-bin/g?DocF=micro/163/11.html> [Accessed 01 March 1998].

 

Part Two

"Derrida's thought is based on his disapproval of the

search for some ultimate metaphysical certainty or

source of meaning that has characterized most Western

philosophy. In his works he offers a way of reading

philosophic texts, called "deconstruction," which

enables him to make explicit the metaphysical

suppositions and a priori assumptions used even by

those philosophers who are the most deeply critical of

metaphysics. Derrida eschewed the holding of any

philosophical doctrine and instead sought to analyze

language in an attempt to provide a radical alternative

perspective in which the basic notion of a

philosophical thesis is called into question."

To cite this page: "Derrida, Jacques" Britannica Online. <http://www.eb.com:180/cgi-bin/g?DocF=micro/166/22.html> [Accessed 01 March 1998].



 



 


Tags: deconstruction

Historical moments of my most recent post

2017.01.29. 18:24 Italo Romano

The blog is written by Károly Földes

(Baudrillard says:)

We have been reproached for the atomic age – but finally [!] we have managed to suspend the equilibrium of terror and have decisively (?) deferred the conclusive event. Now that dissuasion has succeeded, we have to get used to the idea that there is no longer any end, there will no longer be any end and that history itself has become interminable. Consequently, when one speaks of “the end of history”, of “the end of the political”, of “the end of the social”, of “the end of ideologies”, none of this is true. The worst indeed is that there is no end to anything and that everything will continue to take place in a slow, fastidious, recurring and all-encompassing hysterical manner – like nails and hair continue to grow after death. Fundamentally, of course, all this is already dead and instead of a joyous or tragic resolution, instead of a destiny, we are left with an vexatious homeopathic end or outcome that is secreted into metastatic resistances to death. In the wake of all that resurfaces, history backtracks on its own footsteps in a compulsive attempt at rehabilitation, as if in a recompense for some sort of crime I am not aware of – a crime committed by and in spite of us, a kind of crime done to oneself, the process of which is sped up in our contemporary phase of history and the sure signs of which today are global waste, universal repentance and resentment [ressentiment] – a crime where the lawsuit needs to be re-examined and where we have to be unrelenting to go back as far as the origins, if necessary, in quest of retrospective absolution since there is no resolution to our fate in the future.

...

Against the simulation of a linear history “in progress”, we must privilege these rekindled flames, these malignant curves, these light catastrophes which cripple empires much convincingly than major shakeups could ever do. Anastrophe versus catastrophe. Could it be that deep down there may have never been a linear unfolding of history, there may have never been a linear unfolding of language? Everything moves in loops and curls, in tropes, in inversion of meaning, except for numeric and artificial languages which, for this very reason, have neither of these. Everything takes place in effects that short-circuit (metaleptic) causes, in factual Witz, in perverse events, in ironic turnarounds, except for a rectified history which, properly speaking, cannot be such.

 


Tags: light catastrophes cripple empires

Baudrillard's end-of-millennium thoughts

2017.01.29. 17:45 Italo Romano

The blog is written by Károly Földes

 

Hystericizing the Millennium

Jean Baudrillard

The fact that we are entering on a retroactive form of history, that all our ideas, philosophies, mental faculties are progressively adapting themselves to this model, is quite evident. This may just as well be an adventure, since the disappearance of the end is, in itself, an original or creative situation. It seems to be characteristic of our culture and our history which have no end in sight either as guarantors of an indefinite recurrence, of an immortality pursued in the opposite direction. Up till now, immortality was conceived of as a region of the beyond, an immortality yet to come, today however, we have concocted another type of immortality, one on this side of the fence that incorporates the recession of outcomes ad infinitum.

The situation may be original, but the final result or outcome of things is evidently lost in advance or up front. We will never get to know the original chaos, the Big Bang, and because it is a classified event, we had never been there. We could retain the hope however, of seeing the final moment, the Big Crumb, one day. A spasmodic enjoyment of the end to compensate for not having had the chance to revere the beginning [l'origine]. These are the only two interesting moments, and since we were frustrated with the first one, we invest all the more energy into the acceleration of the end, into the precipitation of things or events towards their ultimate loss, a loss from which we were at least thrown the crumbs in the form of the spectacle. Dreaming of an unprecedented opportunity open to a generation to obliterate the end of the world, which is just as wonderful as being part of the beginning. But we have arrived too late for things to begin, only the end or outcome seems to careen under our sway.

We have been reproached for the atomic age - but finally [!] we have managed to suspend the equilibrium of terror and have decisively (?) deferred the conclusive event. Now that dissuasion has succeeded, we have to get used to the idea that there is no longer any end, there will no longer be any end and that history itself has become interminable. Consequently, when one speaks of "the end of history", of "the end of the political", of "the end of the social", of "the end of ideologies", none of this is true. The worst indeed is that there is no end to anything and that everything will continue to take place in a slow, fastidious, recurring and all-encompassing hysterical manner - like nails and hair continue to grow after death. Fundamentally, of course, all this is already dead and instead of a joyous or tragic resolution, instead of a destiny, we are left with an vexatious homeopathic end or outcome that is secreted into metastatic resistances to death. In the wake of all that resurfaces, history backtracks on its own footsteps in a compulsive attempt at rehabilitation, as if in a recompense for some sort of crime I am not aware of - a crime committed by and in spite of us, a kind of crime done to oneself, the process of which is sped up in our contemporary phase of history and the sure signs of which today are global waste, universal repentance and resentment [ressentiment] - a crime where the lawsuit needs to be re-examined and where we have to be unrelenting to go back as far as the origins, if necessary, in quest of retrospective absolution since there is no resolution to our fate in the future. It is imperative that we find out what went wrong and at which moment and then begin examining the traces left on the trail leading up to the present time, to turn over all the rocks of history, to revive the best and the worst in a vain attempt to separate the good from the bad. Following Canetti's hypothesis: we have to return to this side of the fatal line of demarcation which, in history, has kept the human separate from the inhuman, a line that we, at some point, have thoughtlessly crossed under the spell and vertigo of some sort of anticipated liberatory effect. Arguably, it is possible that our collective panic in the face of this blind spot of going beyond history and its ends (then again, what are these ends? all we know is that we've crossed them without noticing that we did) tempts us to take hastening steps backwards in order to escape this simulation in the void. To relocate the zone or point of reference, the earlier scene of a Euclidean space of history. This is what the events of Eastern Europe pretended to embark on by way of peoples' movement and the democratic process. The Gulf War was also an effort to re-open the space of war, of a founding violence to usher in the new world order.

All of these instances failed. This revival of vanished or vanishing forms, this attempt to escape a virtual apocalypse is a utopia, in fact the last of our utopias - the more we try to rediscover the real and the point of reference, the more we sink ourselves into a simulation that has now become shameful and utterly hopeless.

...

We are therefore in an impossible situation, unable to dream either of a past or of a future state of affairs. The situation has literally become definitive - not finite, infinite, or defined but de-finitive, i.e., deprived of its end, pilfered. Consequently, the distinctive sentiment of the definitive, with its pull towards a paradisaic state of affairs, is melancholy. Whereas in the case of mourning, things find their end and, with it, the possibility of an eventual return, in melancholy we no longer hold on to the premonition of the end or of a return, all we are left with is the resentment [ressentiment] of disappearance. It's a bit like the twilight [crepuscular] profile of the turn of this century, the double-faced Gestalt of a linear order, of progress on the one hand, of regression of goals and values, on the other.

To oppose this movement in both directions at once, there is the utterly improbable, and certainly unverifiable, hypotheses of a poetic reversibility of events and the only proof we have of it is the possibility of this in language.

Poetic form is not far removed from chaotic form. Both of them disregard the law of cause and effect. If, in the theory of Chaos, we substitute sensitive reliance upon initial conditions for susceptible dependency upon final conditions, we enter upon the form of predestination, i.e., that of destiny. Poetic language itself abides in predestination, in the imminence of its own end, and thrives on the reversibility of the end in the beginning. In this sense, it is predestined - an unconditional event without any signification or consequence, one that flourishes singularly in the vertigo of its final resolution.

Although this is obviously not the form of our current history, there is, nevertheless, an affinity between the immanence of poetic unfolding and the immanence of our current chaotic progression as events themselves are without any signification or consequence, and because effect stands in for the cause, we have arrived at a point where there are no longer any causes, all we are left with are effects. The world presents itself to us, effectively. There is no longer any reason for it, and God is dead.

If all that remains are effects, we are in total illusion (which is also that of poetic language). If effect is to be found in the cause, or the beginning is in the end, then the catastrophe is behind us. This is the exclusive privilege of our epoch, i.e., the reversal of the sign of catastrophe. This liberates us from all possible future catastrophes, and also exempts us from all responsibility pertaining to it. An end to all preventive psychosis, no more panic, no more remorse! The lost object is behind us. We are free from the Last Judgment.

What stems or follows from all of this is some sort of poetic and ironic analysis of events. Against the simulation of a linear history "in progress", we must privilege these rekindled flames, these malignant curves, these light catastrophes which cripple empires much convincingly than major shakeups could ever do. Anastrophe versus catastrophe. Could it be that deep down there may have never been a linear unfolding of history, there may have never been a linear unfolding of language? Everything moves in loops and curls, in tropes, in inversion of meaning, except for numeric and artificial languages which, for this very reason, have neither of these. Everything takes place in effects that short-circuit (metaleptic) causes, in factual Witz, in perverse events, in ironic turnarounds, except for a rectified history which, properly speaking, cannot be such.

Couldn't we transpose onto social and historical phenomena language games like the anagram, acrostic, spoonerism, rhyme, strophe or stanza and catastrophe? And not only the stately figures of metaphor and metonymy but instantaneous, childish and formal games, sundry tropes that comprise the delicacies of a vulgar imagination? Are there social spoonerisms, an anagrammatic history (where meaning is dismembered and dispersed to the four winds of the earth, like the name of god in the anagram), rhyming forms of political action, events that can take on either this or that meaning? The palindrome, [A word, verse or sentence that reads the same backwards as forwards. Ex.: HannaH.] this poetic and rigorous figure of palinode [recantations] would do well to serve in this time of retroversion of history with a burning lecture (perhaps Paul Virilio's dromology could eventually be replaced with a palindromology?). And the anagram, this minute process that picks up the thread of language, this poetic and non-linear convulsion of sorts - makes one wonder whether there is a chance that history would lend itself to this poetic convulsion, to such a subtle form of return and anaphore and which, should the anagram yield beyond meaning, allow for the pure materiality of language to shine through and also show beyond historical meaning, the pure materiality of time?

This would be the enchanting alternative to the linearity of history, the poetic alternative to a disenchanted confusion, to the chaotic oversupply of current events.

Concurrent with this going beyond history is our entry into pure fiction, into the illusion of the world. The illusion of our history yields up and accedes to a space of a much more radical illusion of the world. Now that the eyes of the Revolution and on the Revolution are shut; now that the Wall of Shame has been demolished, now that the lips of dispute are sealed (with a sugar-coated history stuck to our palate); now that the spectre of communism, i.e., that of power no longer haunt Europe, no longer haunt the memories; now that the aristocratic illusion of origin and the democratic illusion of the end increasingly drift apart - we no longer have the choice to advance, 'to abide in our present destruction', nor to withdraw, only a last ditch effort to confront this radical illusion.

Originally published in French as part of Jean Baudrillard, L'Illusion de la fin: ou La grève des événements (Paris: Galilée, 1992). Translation by Charles Dudas, York University.

 

 

 


Tags: Millenium

Baudrillard's game with the negation of reality

2017.01.27. 19:29 Italo Romano

The blog is written by Károly Földes

Jean Baudrillard's game with the negation of reality

„The novel is a work of art not so much because of its inevitable resemblance with life but because of the insuperable differences that distinguish it from life." - Stevenson

And so is thought! Thought is not so much prized for its inevitable convergences with truth as it is for the insuperable divergences that separate the two.

It is not true that in order to live one has to believe in one's own existence. There is no necessity to that. No matter what, our consciousness is never the echo of our own reality, of an existence set in "real time." But rather it is its echo in "delayed time," the screen of the dispersion of the subject and of its identity - only in our sleep, our unconscious, and our death are we identical to ourselves. Consciousness, which is totally different from belief, is more spontaneously the result of a challenge to reality, the result of accepting objective illusion rather than objective reality. This challenge is more vital to our survival and to that of the human species than the belief in reality and in existence, which always refers to spiritual consolations pertaining to another world. Our world is such as it is, but that does not make it more real in any respect. "The most powerful instinct of man is to be in conflict with truth, and with the real."

The belief in truth is part of the elementary forms of religious life. It is a weakness of understanding, of common-sense. At the same time, it is the last stronghold for the supporters of morality, for the apostles of the legality of the real and the rational, according to whom the reality principle cannot be questioned. Fortunately, nobody, not even those who teach it, lives according to this principle, and for a good reason: nobody really believes in the real. Nor do they believe in the evidence of real life. This would be too sad.

But the good apostles come back and ask: how can you take away the real from those who already find it hard to live and who, just like you and me, have a right to claim the real and the rational? The same insidious objection is proclaimed in the name of the Third World: How can you take away abundance when some people are starving to death? Or perhaps: How can you take away the class struggle from all the peoples that never got to enjoy their Bourgeois revolution? Or again: How can you take away the feminist and egalitarian aspirations from all the women that have never heard of women's rights? If you don't like reality, please do not make everybody else disgusted with it! This is a question of democratic morality: Do not let Billancourt despair! You can never let people despair.

There is a profound disdain behind these charitable intentions. This disdain first lies in the fact that reality is instituted as a sort of life-saving insurance, or as a perpetual concession, as if it were the last of human rights or the first of everyday consumer products. But, above all, by acknowledging that people place their hope in reality only, and in the visible proof of their existence, by giving them a realism reminiscent of St. Sulpice, they are depicted as naive and idiotic. This disdain, let us acknowledge it, is first imposed on themselves by these defenders of realism, who reduce their own life to an accumulation of facts and proofs, of causes and effects. After all, a well-structured resentment always stems from one's own experience.

Say: I am real, this is real, the world is real, and nobody laughs. But say: this is a simulacrum, you are only a simulacrum, this war is a simulacrum, and everybody bursts out laughing. With a condescending and yellow laughter, or perhaps a convulsive one, as if it was a childish joke or an obscene invitation. Anything which belongs to the order of simulacrum is obscene or forbidden, similar to that which belongs to sex or death. However, our belief in reality and evidence is far more obscene. Truth is what should be laughed at. One may dream of a culture where everyone bursts into laughter when someone says: this is true, this is real.

All this defines the insoluble relationship between thought and the real. A certain type of thought is an accomplice of the real. It starts with the hypothesis that there is a real reference to an idea and that there is a possible "ideation" of reality. This is no doubt a comforting perspective, one which is based on meaning and deciphering. This is also a polarity, similar to that used by ready-made dialectical and philosophical solutions. The other thought, on the contrary, is ex-centric from the real. It is an "ex-centering" of the real world and, consequently, it is alien to a dialectic which always plays on adversarial poles. It is even alien to critical thought which always refers to an ideal of the real. To some extent, this thought is not even a denial of the concept of reality. It is an illusion, that is to say a "game" played with desire (which this thought puts "into play"), just like metaphor is a "game" played with truth. This radical thought comes neither from a philosophical doubt nor from a utopian transference (which always supposes an ideal transformation of the real). Nor does it stem from an ideal transcendence. It is the "putting into play" of this world, the material and immanent illusion of this so-called "real" world - it is a non-critical, non-dialectical thought. So, this thought appears to be coming from somewhere else. In any case, there is an incompatibility between thought and the real. Between thought and the real, there is no necessary or natural transition. Not an "alternation," not an alternative either: only an "alterity" keeps them under pressure. Only fracture, distance and alienation safeguard the singularity of this thought, the singularity of being a singular event, similar in a sense to the singularity of the world through which it is made into an event.

Things probably did not always happen this way. One may dream of a happy conjunction of idea and reality, in the shadow of the Enlightenment and of modernity, in the heroic ages of critical thought. But that thought, which operated against a form of illusion - superstitious, religious, or ideological - is substantially over. And even if that thought had survived its catastrophic secularization in all the political systems of the 20th century, the ideal and almost necessary relationship between concept and reality would in any case have been destroyed today. That thought disappeared under the pressure of a gigantic simulation, a technical and mental one, under the pressure of a precession of models to the benefit of an autonomy of the virtual, from now on liberated from the real, and of a simultaneous autonomy of the real that today functions for and by itself - motu propio - in a delirious perspective, infinitely self-referential. Expelled, so to speak, from its own frame, from its own principle, pushed toward its extraneity, the real has become an extreme phenomenon. So, we no longer can think of it as real. But we can think of it as "ex-orbitated," as if it was seen from another world - as an illusion then...”

(author's notes are omitted.)

Excerpt from a translation of Jean Baudrillard's La Pensée Radicale, published in the fall of 1994 in French by Sens & Tonka, eds., Collection Morsure, Paris, 1994. Translated by François Debrix of Purdue University.



 

