Perhaps I am overly emphatic about the lack of any effective human response to the obliteration of almost all major forms of life due to the human dynamic of liberating all that climate-destroying carbon buried for millions of years. I imagine evolution does not favor a species that is enthusiastic about suicide, and it quite firmly indicates that humanity has no claim whatsoever on being the most advanced species.
Artificial intelligence: doom or survival?
- Eduk
Re: Artificial intelligence: doom or survival?
I guess I am trying to recognise the good and the bad. And that overcoming the bad is far from trivial.
- Frost
Re: Artificial intelligence: doom or survival?
Eduk wrote: ↑March 13th, 2018, 1:14 pm
Frost, if I just take your first paper from Bem it quickly becomes obvious that it is far from the consensus of expert opinion. Although well done for finding woo that is at least published.
This article is in particular interesting (though probably not to you) as it both tackles Bem's problems in an easy to understand manner and talks about the issue you raised with medical research (not the given that you assumed - which is to be expected as you aren't an expert).
https://theness.com/neurologicablog/ind ... -research/
Thanks for posting this. This is something that has specific criticisms that can be addressed. I will attempt to do that here:
From the linked article:

"For experienced skeptics, this was not much of a surprise. When dealing with claims that have a vanishingly small prior probability, you need extraordinary evidence to be taken seriously, and this wasn't it. We were already very familiar with these kinds of results – if you squint just right there is a teeny tiny effect size. But we already knew that experiments are easy to fudge, even unwittingly, and it would therefore take a lot more to rewrite all the physics textbooks. (What is more likely, that the fundamental nature of reality is not what we thought, or Bem was a little sloppy in his research?) The key (as acknowledged by Bem himself) would be in replication."

The statement that with "a vanishingly small prior probability, you need extraordinary evidence to be taken seriously" is pseudo-scientific nonsense. This is a reference to the use of Bayesian analysis and the selection of a prior probability. There was a published analysis criticizing the frequentist analysis used by Bem, claiming that psychologists need to change how they analyze data; Bayesian analysis was used to claim that the effects were not significant.
The problem is that the prior probability in Bayesian analysis is a matter of case probability, not class probability. In other words, it is an epistemically subjective probability. While Bayesian analysis attempts to be epistemically objective, this is not possible: it requires an epistemically subjective judgment as to how likely a phenomenon is, and this should not be presented as an epistemically objective statistical analysis. If a person subjectively judges a phenomenon to be very unlikely, that prior can easily negate legitimate effects. This also leads to the nonsense claim that "extraordinary claims require extraordinary evidence." Hardly. They require standard levels of evidence, but more replication. Furthermore, scientific discoveries are by their very nature unlikely, and such standards, based on epistemic subjectivity, would negate many discoveries. In short, Bayesian analysis parades itself as an epistemically objective statistical analysis, but the prior probability introduces an epistemically subjective element of case probability. This is not a reason to reject Bayesian analysis, but one must acknowledge this limitation. I don't have the reference right at hand, but Jessica Utts, later president of the American Statistical Association, published a paper pointing out this element of the Bayesian analysis that was supposed to refute the Bem paper. With a less biased prior, the Bayesian analysis closely matched the frequentist analysis found originally in the Bem paper.
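The sensitivity to the prior can be made concrete. A minimal sketch (my own illustration, not from any of the papers discussed) using the identity posterior odds = Bayes factor × prior odds:

```python
# Illustrative only: the same Bayes factor yields opposite verdicts
# depending on the (subjective) prior probability assigned to H1.

def posterior_prob(bayes_factor: float, prior_h1: float) -> float:
    """Posterior P(H1 | data), via posterior odds = BF * prior odds."""
    prior_odds = prior_h1 / (1.0 - prior_h1)
    post_odds = bayes_factor * prior_odds
    return post_odds / (1.0 + post_odds)

bf = 100.0  # Jeffreys' conventional threshold for "decisive" evidence
for prior in (0.5, 1e-2, 1e-6):  # neutral, skeptical, "vanishingly small"
    print(f"prior={prior:g}  posterior={posterior_prob(bf, prior):.6f}")
```

With a neutral prior the same evidence is decisive (posterior about 0.99); with a "vanishingly small" prior it is negated (posterior about 0.0001). That swing comes entirely from the prior, which is exactly the subjective element at issue.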
When you hear analysis that sounds like "if you squint just right there is a teeny tiny effect size," that already smacks of motivated reasoning. However, I need to provide evidence to support that assertion. Cohen's guidelines generally put a small effect size around 0.2 and a medium one around 0.5, and psychology generally involves effect sizes of 0.2 to 0.3. Bem's results fall within the standard effect sizes found in mainstream psychology, and while that is indeed a small effect size, a small effect is still an effect. The point is that there should not be any effect at all, yet one is measured with high statistical significance and a high z score.
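For reference, Cohen's d (the scale behind the small/medium benchmarks of 0.2 and 0.5 above) is just a standardized mean difference. A minimal sketch with made-up data, not taken from any study discussed here:

```python
import math

def cohens_d(a: list, b: list) -> float:
    """Standardized mean difference between two independent samples,
    using the pooled sample standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical scores: a shift of about a third of a standard deviation,
# i.e. "small" on Cohen's scale, but still a real difference in means.
print(cohens_d([52, 49, 55, 51, 48, 53], [51, 48, 54, 50, 47, 53]))  # ~0.31
```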
To also claim that our options are either to “rewrite all the physics textbooks” or that “Bem was a little sloppy in his research” is hyperbole. The physicist Henry Stapp has written on how this would not require the rewriting of physics:
http://www-physics.lbl.gov/~stapp/Reason01132012.doc
At most, it appears it would require slight modification. However, I would argue that this intervention constitutes a new domain and does not require anything to be rewritten, since the class probability found in quantum experiments is now a matter of case probability and, strictly speaking, does not constitute a statistical violation of orthodox quantum mechanics for this reason (this would also apply to the Radin experiments using the double-slit and Michelson interferometer devices).
From the linked article:

"In the six years since Bem published his research there has been an increasing awareness of the potential problems in conducting rigorous scientific research, and not just in psychology research but in all of medicine and other areas as well. I have been carefully documenting both here and at SBM all the research that shows these problems – the problems with publication bias, p-hacking, and the failure to replicate.

This is, in fact, the central thesis of science-based medicine. It is too easy to manufacture false positive results, and there is too much incentive to do so and to publish such results. We need to tweak our incentives and filters, and take a more thorough look at the entire literature before we can arrive at reliable scientific conclusions."

This is all true. However, it must be noted that Bem's experiments were essentially a replication of many previous experiments. As mentioned, he published a meta-analysis of 90 previous experiments (cited earlier), which actually indicates replicability. Additionally, statistical analyses from statisticians such as Utts have confirmed there was no p-hacking. In other words, this was not a one-off experiment, but one of many, which indicates replication.
From the linked article:

"They presented their results last summer, at the most recent annual meeting of the Parapsychological Association. According to their pre-registered analysis, there was no evidence at all for ESP, nor was there any correlation between the attitudes of the experimenters—whether they were believers or skeptics when it came to psi—and the outcomes of the study. In summary, their large-scale, multisite, pre-registered replication ended in a failure."

He provides no citation, so I cannot comment without tracking it down. However, I am skeptical of his skepticism, given the earlier errors in his analysis.
From the linked article:

"The fact remains that no psi research protocol has withstood the test of time, and held up to rigorous replication. There is no psi effect that meets my criteria for being compelling: simultaneously showing a statistically significant effect that also shows a significant signal to noise ratio with highly rigorous experimental protocols that hold up to replication. You can get some of these features with psi, but never all of them."

This is an outright falsehood, which indicates he is either not familiar with the evidence or is misrepresenting it; most likely the former. The most obvious counterexample is the Ganzfeld experiments, which show a replication rate ranging from 25 to 37 percent, with the highest rate corresponding to the most recent, higher-quality studies and the lower rates to the somewhat lower-quality experiments. Note that this analysis was published in a mainstream peer-reviewed journal:
Storm, L., Tressoldi, P. E., & Di Risio, L. (2010). "Meta-Analysis of Free-Response Studies, 1992–2008: Assessing the Noise Reduction Model in Parapsychology." Psychological Bulletin, 136, 893. doi:10.1037/a0020840
There are many other analyses, including Bem’s meta-analysis already mentioned and cited. However, I need to track down the presentation of the findings mentioned since he either didn’t make a proper citation or I am apparently blind as a bat and didn’t see it.
- Eduk
Re: Artificial intelligence: doom or survival?
I would be forced to go with the non-controversial consensus of expert opinion, which is clear. If you aren't sure, I recommend a meta-analysis. Perhaps you could approach various worldwide bodies and ask them?
Having said that, I do have experience of pathological liars, mistakes, normal liars, and plain incompetence. It is in part what prevents me from investing in Nigerian princes (even though I have no evidence they aren't).
Also, I am happy to look at results. Telling me a dead relative is happy is meaningless. Bending a spoon with your mind is pointless. Being able to read minds but not being the king of the world is unconvincing. Clearly not all claims are equal. If I could read minds, I can assure you that you would know about it.
I broke down the other day and was towed to the garage by a breakdown driver who was not only extremely hyperactive but also, supposedly, an engineer working at Heathrow, a helicopter pilot, an owner of multiple million-pound mansions in the local area, and a successful businessman who had issues with his partners. Admittedly, all I saw was a breakdown driver, so I kind of assume he's a breakdown driver and a pathological liar. I'm not sure how I could function if I assumed otherwise.
- Frost
Re: Artificial intelligence: doom or survival?
Eduk wrote: ↑March 13th, 2018, 3:44 pm
Me personally, I'm not a scientist. Especially I'm not a scientist concerned with the paranormal. Therefore I'm not in a position to give an expert analysis (neither are you). I would be forced to go with the non controversial consensus of expert opinion. Which is clear. If you aren't sure I recommend a meta analysis. Perhaps you could approach various world wide bodies and ask them?

While I can appreciate the conservative approach to science, I did post a couple of meta-analyses already (by Bem and by Storm), although many more have been published; I can provide citations if you wish. The consensus from the meta-analyses, even those involving outside statisticians, is clear that there is an effect that needs explanation. The vast majority of those claiming it is debunked never address the mountain of research that exists at this point.
- Eduk
Re: Artificial intelligence: doom or survival?
- Eduk
Re: Artificial intelligence: doom or survival?
Frost wrote: The vast majority of those claiming it is debunked never address the mountain of research that exists at this point.

I disagree. Lame means weak, puny, feeble.
- Frost
Re: Artificial intelligence: doom or survival?
Okay, what meta-analyses are you referring to?
Claiming the effect is weak doesn't mean there is no effect. The point is that there isn't supposed to be any effect.
- Count Lucanor
Re: Artificial intelligence: doom or survival?
Frost wrote: I've been through enough of these exchanges to know it's what you get when you have no scientific argument.

In other words, you're saying that it's easy to spot, without complex maneuvers, the absence of a scientific argument. Fine, because that's exactly what I did: I spotted the absence of a scientific argument against my claim that the self and the world are a material continuum.
Frost wrote: No one in science talks of "proof." If you are requiring that there be proof, then you are not talking science.

Fine, again. You're actually admitting that you're not trying to prove me wrong, at least not with science. But I still don't get what your point is, then...
Frost wrote: The paper is a replication as is evidenced by the paper Bem also published that was a meta-analysis of 90 previous similar experiments which also established an effect. The effect was measured and it was very significant. Again a z score of 6.66 is a significant effect. If you understand statistics as you claim, then you should be stunned by this z score in this context.

I'll get into Mr. Bem's paper again, but first, let me correct your impression that I already bought the idea that statistical analysis is what hard science is all about, and that all that is required to assertively embrace or challenge its results is to master it. There's a major epistemological error here, because statistical analysis is not what hard science is all about. It actually represents the degradation of science, especially in the science of non-deterministic systems, where pure numbers not only fail to tell the whole story but actually contribute to hiding the real story, often by reducing the relevant information to quantifiable sets separated from all their complex determining factors. In their place, spurious correlations based on the particular ideology of the researcher are offered.
Frost wrote: You do realize that typically a p value of 0.05 is the standard for finding an effect, right? I thought you claimed that you're "familiar with the significance of statistics in empirical analysis in science."

An additional correction: when I said "the significance of statistics," I meant the secondary role of statistics. It is well known that statistics are often misused to show the results the researchers expect, so they need to be treated carefully as a helping tool, not as the core of the research method. A typical example is the correlation fallacy: http://www.tylervigen.com/spurious-correlations
Now, about p-values and similar statistical concepts. When you say "a p value of 0.05 is the standard for finding an effect," you mean a standard and an effect in which fields or cases? All of them? If I put an object in a vacuum, let it fall from a given height 1,000 times, measure its speed, and find that 53% of the time it corresponded to the speed expected under Earth's gravity, will that tell you something about the strong likelihood of gravity affecting the object?
And out of Bem's study, how about the "hit rate on the nonerotic pictures (that) did not differ significantly from chance: 49.8%... true across all types of nonerotic pictures: neutral pictures, 49.6%; negative pictures, 51.3%; positive pictures, 49.4%; and romantic but nonerotic pictures, 50.2%..." Since the statistical inference is the same, what is the scientifically-tested explanation that makes these results not evidence of lacking psi abilities?
How about the rigor of the method and its replication: how many of these tests involved not two curtains, but three, four or five, so that it could be shown that the same results are achieved?
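The hit-rate comparisons being argued over here come down to a binomial test against chance. A minimal sketch; the trial counts are my own illustration, not figures taken from Bem's paper:

```python
from math import comb

def binom_p_one_sided(hits: int, trials: int, p: float = 0.5) -> float:
    """Exact one-sided p-value: P(X >= hits) under chance performance."""
    return sum(comb(trials, k) * p**k * (1.0 - p)**(trials - k)
               for k in range(hits, trials + 1))

# A 53.1% hit rate over 1000 illustrative trials clears p < 0.05...
print(binom_p_one_sided(531, 1000))
# ...while 49.8% over the same trials is indistinguishable from chance.
print(binom_p_one_sided(498, 1000))
```

This is why both sides can point at the same kind of number: a few percent above chance is statistically detectable with enough trials, while rates at or below 50% yield no evidence at all.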
Frost wrote: Psychology fails as a science? That is just pure nonsense.

If you won't agree with me, perhaps you will agree with the journal editors' own admission that they were willing to "suspend beliefs about causality". So they say in the editorial comment that came with the article:
"We openly admit that the reported findings conflict with our own beliefs about causality and that we find them extremely puzzling. Yet, as editors we were guided by the conviction that this paper—as strange as the findings may be—should be evaluated just as any other manuscript on the basis of rigorous peer review. Our obligation as journal editors is not to endorse particular hypotheses but to advance and stimulate science through a rigorous review process."
Frost wrote: Science uses statistical analysis which you apparently don't believe in, for whatever reason, since you never gave a scientific or mathematical reason. I guess we should just reject vast swaths of science and statistics that have permitted a tremendous amount of progress of knowledge. Please, then, tell me how science is really done.

When trying to find the Bem paper online again, I accidentally hit another link. Perhaps not a lucky accident, but a premonition. It turns out it lays out in detail my basic objections to the type of "science" represented in such a paper (another premonition, I guess).
https://replicationindex.wordpress.com/ ... gnition-a/
In summary:
- A highly controversial article. It made psychologists doubt other published findings in psychology.
- The article demonstrated fundamental flaws in the way social psychologists conducted and reported empirical studies.
- Other replication studies were carried out and did not get the same results. Some of them will not get published in the same journal for some fishy reasons: https://www.newscientist.com/article/dn ... cognition/
- Random numbers can provide evidence for any hypothesis, if they are selected for significance
- Bem's results are explained by bad research practices, such as selecting data that matches the researcher's criteria of success.
- Bem himself acknowledges that he gets "more credit for having started the revolution in questioning mainstream psychological methods...", which means the so-called "standards" mean nothing at this time.
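The point that random numbers can "support" any hypothesis if selected for significance is easy to demonstrate. A minimal simulation of my own (not from the linked post): run null experiments where nothing is going on, then keep only the significant ones.

```python
import random

def run_null_experiments(n_studies: int = 2000, n: int = 100) -> list:
    """Simulate coin-guessing experiments whose true hit rate is 50%;
    return the z score of each study (normal approximation)."""
    zs = []
    for _ in range(n_studies):
        hits = sum(random.random() < 0.5 for _ in range(n))
        zs.append((hits - n / 2) / (n / 4) ** 0.5)
    return zs

random.seed(42)
zs = run_null_experiments()
published = [z for z in zs if z > 1.645]  # keep only "significant" results
print(f"{len(published)} of {len(zs)} null studies look significant")
```

Roughly 5% of pure-noise studies clear the one-sided threshold; a file drawer that hides the rest makes a nonexistent effect look real and replicable. Whether that is what happened in any particular literature is exactly what fail-safe-N and p-curve analyses are meant to test.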
Frost wrote: What on earth are you talking about? When you say it is "without any specific content supporting a specific argument," did you miss the part where I posted three research papers with multiple studies in a mainstream peer-reviewed physics journal? Try addressing the research instead of this blatant ad hominem attempt and what amounts to a "nuh uh" argument with no scientific or statistical arguments.

Let me remind you that a list of articles is not an argument. It does not address any specific claim of mine with another specific claim or argument of yours. You could have constructed such arguments by selecting the relevant information from the sources you thought helped your case, but you chose not to. Having no arguments of yours to address, I'm simply forced to comment on the general credibility of the studies referenced, since that seems to be your main point of contention: that woo woo is serious science, as if pseudosciences did not exist. If you have any specific argument other than that, please provide it.
- Frost
Re: Artificial intelligence: doom or survival?
I would like to try to condense this to address the few main points you made. First, recall the Bem meta-analysis cited earlier:

"We here report a meta-analysis of 90 experiments from 33 laboratories in 14 countries which yielded an overall effect greater than 6 sigma, z = 6.40, p = 1.2 × 10^-10 with an effect size (Hedges' g) of 0.09. A Bayesian analysis yielded a Bayes Factor of 1.4 × 10^9, greatly exceeding the criterion value of 100 for "decisive evidence" in support of the experimental hypothesis (Jeffreys, 1961). When Bem's own experiments are excluded from the analysis, the combined effect size for replications by other investigators is 0.06, z = 4.16, p = 1.1 × 10^-5, and the BF value is 3,853, again exceeding the criterion for "decisive evidence." The number of potentially unretrieved experiments required to reduce the overall effect size of the complete database to a trivial value of 0.01, estimated with Orwin's fail-safe N, is 544, and six of seven additional statistical tests support the conclusion that the database is not significantly compromised by selection bias, or "p-hacking"—the selective suppression of findings or statistical analyses that failed to yield statistical significance. P-curve analysis, a recently introduced statistical technique (Simonsohn et al., 2014b), estimates the true effect size of our database to be 0.20, virtually identical to the mean effect size of Bem's original experiments (0.22) and the closely related "presentiment" experiments (0.21)."
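As a sanity check on figures like these, z scores and one-tailed p-values inter-convert via the standard normal tail. The formula is standard; the check itself is my own sketch:

```python
import math

def one_tailed_p(z: float) -> float:
    """P(Z > z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# The quoted z = 6.40 and z = 4.16 correspond to one-tailed p-values
# of the same order as the quoted p = 1.2 x 10^-10 and p = 1.1 x 10^-5.
print(one_tailed_p(6.40))
print(one_tailed_p(4.16))
```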
Count Lucanor wrote: ↑March 13th, 2018, 10:14 pm
- A highly controversial article. It made psychologists doubt other published findings in psychology.

"Highly controversial" is not a scientific argument.
Count Lucanor wrote: ↑March 13th, 2018, 10:14 pm
- The article demonstrated fundamental flaws in the way social psychologists conducted and reported empirical studies.

Such as? This is not a legitimate objection unless you specify exactly what flaws existed, in a falsifiable manner, so they are potentially refutable. You know, kind of like how I made an actual argument on Bayesian analysis from the helpful article posted by Eduk. Similarly, you go on about how statistics is flawed; you have to provide specific examples. Just giving a link on a correlation fallacy has nothing to do with the paper, unless you wish to make a specific argument, which you have not done. You fail to provide an argument of any substance that can be addressed regarding methodology, statistics, etc.
Count Lucanor wrote: ↑March 13th, 2018, 10:14 pm
- Other replication studies were carried out and did not get the same results. Some of them will not get published in the same journal for some fishy reasons: https://www.newscientist.com/article/dn ... cognition/

Yes, it is well known that JPSP rejected a replication attempt, but it was rejected because many mainstream journals do not like to publish replications and prefer novel research. This is a problem. The authors went to another journal, and that journal said they should publish it in the journal that published the original work. Interestingly enough, the Parapsychological Association has had a policy since 1975 of publishing both replications and failed replications to reduce the file-drawer effect, a higher standard than that of many psychology journals. That psychology and social science have a replication problem (as does medical research) is only now being addressed. It's funny, because parapsychology journals have far better standards and replication rates than these sciences and addressed many of these issues long ago.
Count Lucanor wrote: ↑March 13th, 2018, 10:14 pm
Let me remind you that a list of articles is not an argument. It does not address any specific claim of mine with another specific claim or argument of yours. You could have constructed such arguments selecting the relevant information from the sources you thought helped your case, but you chose not to. Having no arguments of yours to address, I'm simply forced to comment on the general credibility of the studies referenced, since that seems to be your main point of contention: that woo woo is serious science, as if pseudosciences did not exist. If you have any specific argument other than that, please provide it.

Multiple papers from a peer-reviewed mainstream physics journal are not an argument? Are you kidding me? My claim is that this provides prima facie evidence that consciousness is not merely material. Posting research papers from a physics journal most certainly is an argument, one you continue neither to address nor to counter with any scientific argument. Why can't you just consider the evidence and make an actual argument? Is it really that threatening to your worldview that you must react so irrationally?
- Frost
Re: Artificial intelligence: doom or survival?
Mossbridge, J., Tressoldi, P., & Utts, J. (2012). "Predictive Physiological Anticipation Preceding Seemingly Unpredictable Stimuli: A Meta-Analysis." Frontiers in Psychology, 3, 390. doi:10.3389/fpsyg.2012.00390
This one is interesting because it was published in the largest journal in its field, Frontiers in Psychology, and one of the authors, Jessica Utts, was president of the American Statistical Association in 2016 (even funnier because I have her textbook on statistics). You really need to specify what she is doing wrong in her statistical analysis if you want to claim that these analyses are bogus:
Abstract
This meta-analysis of 26 reports published between 1978 and 2010 tests an unusual hypothesis: for stimuli of two or more types that are presented in an order designed to be unpredictable and that produce different post-stimulus physiological activity, the direction of pre-stimulus physiological activity reflects the direction of post-stimulus physiological activity, resulting in an unexplained anticipatory effect. The reports we examined used one of two paradigms: 1) randomly presented arousing vs. neutral stimuli, or 2) guessing tasks with feedback (correct vs. incorrect). Dependent variables included: electrodermal activity, heart rate, blood volume, pupil dilation, electroencephalographic activity (EEG), and blood oxygenation level dependent (BOLD) activity. To avoid including data hand-picked from multiple different analyses, no post-hoc experiments are considered. The results reveal a significant overall effect with a small effect size (random effects: overall [weighted] ES = 0.21, 95% CI = 0.13–0.29, z = 5.3, p < 5.7 × 10^-8; fixed effects: overall ES = 0.21, 95% CI = 0.15–0.27, z = 6.9, p < 2.7 × 10^-12). Higher quality experiments produce a quantitatively larger effect size and a greater level of significance than lower quality studies. The number of contrary unpublished reports that would be necessary to reduce the level of significance to chance (p > 0.05) was conservatively calculated to be 87 reports. We explore alternative explanations and examine the potential linkage between this unexplained anticipatory activity and other results demonstrating meaningful pre-stimulus activity preceding behaviourally relevant events. Multiple replications arising from different laboratories using the same methods are necessary to further examine this currently unexplained anticipatory activity. The cause of this anticipatory activity, which undoubtedly lies within the realm of natural physical processes (as opposed to supernatural or paranormal ones), remains to be determined.
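For readers unfamiliar with how an "overall ES" and its confidence interval are produced in abstracts like this, the basic machinery is inverse-variance (fixed-effect) pooling. A minimal sketch; the per-study values below are made up for illustration, not drawn from the meta-analysis:

```python
import math

def fixed_effect_pool(effects: list, variances: list):
    """Inverse-variance weighted mean effect size with a 95% CI."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study effect sizes and their variances:
es, ci = fixed_effect_pool([0.25, 0.18, 0.22, 0.15], [0.010, 0.020, 0.015, 0.008])
print(es, ci)
```

Precise studies (small variance) get large weights, so the pooled estimate sits between the individual effects and its interval narrows as studies accumulate; random-effects models add a between-study variance term on top of this.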
Predictive Physiological Anticipation Preceding Seemingly Unpredictable Stimuli: A Meta-Analysis (PDF Download Available). Available from: https://www.researchgate.net/publicatio ... a-Analysis [accessed Mar 13 2018].
- Eduk
Re: Artificial intelligence: doom or survival?
My point about the weak effect was simply that, even if I do happen to be wrong, it doesn't affect my day-to-day life. I don't have to worry, for example, that someone can read my PIN code from my mind. Granted, if I am wrong, that might change one day, but I'm not wrong.
- Eduk
Re: Artificial intelligence: doom or survival?
For example, the espionage surrounding the Manhattan Project shows how hard it is to keep a secret.