
Monday 14 November 2022

On studying smarter.

Studying 101: Study Smarter Not Harder 

University of North Carolina 

Do you ever feel like your study habits simply aren’t cutting it? Do you wonder what you could be doing to perform better in class and on exams? Many students realize that their high school study habits aren’t very effective in college. This is understandable, as college is quite different from high school. The professors are less personally involved, classes are bigger, exams are worth more, reading is more intense, and classes are much more rigorous. That doesn’t mean there’s anything wrong with you; it just means you need to learn some more effective study skills. Fortunately, there are many active study strategies shown to be effective in college classes.


This handout offers several tips on effective studying. Implementing these tips into your regular study routine will help you to efficiently and effectively learn course material. Experiment with them and find some that work for you. 

Reading is not studying 

Simply reading and re-reading texts or notes is not actively engaging with the material, and merely ‘doing’ the readings for class is not studying. Re-reading leads to quick forgetting.


Think of reading as an important part of pre-studying, but learning information requires actively engaging in the material (Edwards, 2014). Active engagement is the process of constructing meaning from text that involves making connections to lectures, forming examples, and regulating your own learning (Davis, 2007). Active studying does not mean highlighting or underlining text, re-reading, or rote memorization. Though these activities may help to keep you engaged in the task, they are not considered active studying techniques and are weakly related to improved learning (Mackenzie, 1994).


Ideas for active studying include:


Create a study guide by topic. Formulate questions and problems and write complete answers. Create your own quiz.

Become a teacher. Say the information aloud in your own words as if you are the instructor and teaching the concepts to a class.

Derive examples that relate to your own experiences.

Create concept maps or diagrams that explain the material.

Develop symbols that represent concepts.

For non-technical classes (e.g., English, History, Psychology), figure out the big ideas so you can explain, contrast, and re-evaluate them.

For technical classes, work the problems and explain the steps and why they work.

Study in terms of question, evidence, and conclusion: What is the question posed by the instructor/author? What is the evidence that they present? What is the conclusion?

Organization and planning will help you to actively study for your courses. When studying for a test, organize your materials first and then begin your active reviewing by topic (Newport, 2007). Often professors provide subtopics on the syllabi. Use them as a guide to help organize your materials. For example, gather all of the materials for one topic (e.g., PowerPoint notes, textbook notes, articles, homework, etc.) and put them together in a pile. Label each pile with the topic and study by topics.


For more information on the principle behind active studying, check out our tipsheet on metacognition. 

Understand the Study Cycle 

The Study Cycle, developed by Frank Christ, breaks down the different parts of studying: previewing, attending class, reviewing, studying, and checking your understanding. Although each step may seem obvious at a glance, all too often students try to take shortcuts and miss opportunities for good learning. For example, you may skip a reading before class because the professor covers the same material in class; doing so misses a key opportunity to learn in different modes (reading and listening) and to benefit from the repetition and distributed practice (see #3 below) that you’ll get from both reading ahead and attending class. Understanding the importance of all stages of this cycle will help make sure you don’t miss opportunities to learn effectively. 

Spacing out is good 

One of the most impactful learning strategies is “distributed practice”—spacing your studying out in several short sessions across several days and weeks (Newport, 2007). The most effective practice is to work a short time on each class every day. The total amount of time spent studying will be the same (or less) than one or two marathon library sessions, but you will learn the information more deeply and retain much more for the long term—which will help get you an A on the final. The important thing is how you use your study time, not how long you study. Long study sessions lead to a lack of concentration and thus a lack of learning and retention.


In order to spread out studying over short periods of time across several days and weeks, you need control over your schedule. Keeping a list of tasks to complete on a daily basis will help you to include regular active studying sessions for each class. Try to do something for each class each day. Be specific and realistic regarding how long you plan to spend on each task—you should not have more tasks on your list than you can reasonably complete during the day.


For example, you may do a few problems per day in math rather than all of them the hour before class. In history, you can spend 15-20 minutes each day actively studying your class notes. Thus, your studying time may still be the same length, but rather than only preparing for one class, you will be preparing for all of your classes in short stretches. This will help you focus, stay on top of your work, and retain information.


In addition to learning the material more deeply, spacing out your work helps stave off procrastination. Rather than having to face the dreaded project for four hours on Monday, you can face the dreaded project for 30 minutes each day. The shorter, more consistent time to work on a dreaded project is likely to be more acceptable and less likely to be delayed to the last minute. Finally, if you have to memorize material for class (names, dates, formulas), it is best to make flashcards for this material and review them periodically throughout the day rather than in one long memorization session (Wissman and Rawson, 2012). See our handout on memorization strategies to learn more. 

It’s good to be intense 

Not all studying is equal. You will accomplish more if you study intensively. Intensive study sessions are short and will allow you to get work done with minimal wasted effort. Shorter, intensive study times are more effective than drawn out studying.


In fact, one of the most impactful study strategies is distributing studying over multiple sessions (Newport, 2007). Intensive study sessions can last 30 to 45 minutes and include active studying strategies. For example, self-testing is an active study strategy that improves the intensity of studying and efficiency of learning. However, planning to spend hours on end self-testing is likely to cause you to become distracted and lose your attention.


On the other hand, if you plan to quiz yourself on the course material for 45 minutes and then take a break, you are much more likely to maintain your attention and retain the information. Furthermore, the shorter, more intense sessions will likely put the pressure on that is needed to prevent procrastination. 

Silence isn’t golden 

Know where you study best. The silence of a library may not be the best place for you. It’s important to consider what noise environment works best for you. You might find that you concentrate better with some background noise. Some people find that listening to classical music while studying helps them concentrate, while others find this highly distracting. The point is that the silence of the library may be just as distracting as (or more distracting than) the noise of a gymnasium. Thus, if silence is distracting, but you prefer to study in the library, try the first or second floors where there is more background ‘buzz.’


Keep in mind that active studying is rarely silent as it often requires saying the material aloud. 

Problems are your friend 

Working and re-working problems is important for technical courses (e.g., math, economics). Be able to explain the steps of the problems and why they work.


In technical courses, it is usually more important to work problems than read the text (Newport, 2007). In class, write down in detail the practice problems demonstrated by the professor. Annotate each step and ask questions if you are confused. At the very least, record the question and the answer (even if you miss the steps).


When preparing for tests, put together a large list of problems from the course materials and lectures. Work the problems and explain the steps and why they work (Carrier, 2003). 

Reconsider multitasking 

A significant amount of research indicates that multi-tasking does not improve efficiency and actually negatively affects results (Junco, 2012).


In order to study smarter, not harder, you will need to eliminate distractions during your study sessions. Social media, web browsing, game playing, texting, etc. will severely affect the intensity of your study sessions if you allow them! Research is clear that multi-tasking (e.g., responding to texts, while studying), increases the amount of time needed to learn material and decreases the quality of the learning (Junco, 2012).


Eliminating the distractions will allow you to fully engage during your study sessions. If you don’t need your computer for homework, then don’t use it. Use apps to help you set limits on the amount of time you can spend at certain sites during the day. Turn your phone off. Reward intensive studying with a social-media break (but make sure you time your break!) See our handout on managing technology for more tips and strategies. 

Switch up your setting 

Find several places to study in and around campus and change up your space if you find that it is no longer a working space for you.


Know when and where you study best. It may be that your focus at 10:00 PM is not as sharp as at 10:00 AM. Perhaps you are more productive at a coffee shop with background noise, or in the study lounge in your residence hall. Perhaps when you study on your bed, you fall asleep.


Have a variety of places in and around campus that are good study environments for you. That way wherever you are, you can find your perfect study spot. After a while, you might find that your spot is too comfortable and no longer is a good place to study, so it’s time to hop to a new spot! 

Become a teacher 

Try to explain the material in your own words, as if you are the teacher. You can do this in a study group, with a study partner, or on your own. Saying the material aloud will point out where you are confused and need more information and will help you retain the information. As you are explaining the material, use examples and make connections between concepts (just as a teacher does). It is okay (even encouraged) to do this with your notes in your hands. At first you may need to rely on your notes to explain the material, but eventually you’ll be able to teach it without your notes.


Creating a quiz for yourself will help you to think like your professor. What does your professor want you to know? Quizzing yourself is a highly effective study technique. Make a study guide and carry it with you so you can review the questions and answers periodically throughout the day and across several days. Identify the questions that you don’t know and quiz yourself on only those questions. Say your answers aloud. This will help you to retain the information and make corrections where they are needed. For technical courses, do the sample problems and explain how you got from the question to the answer. Re-do the problems that give you trouble. Learning the material in this way actively engages your brain and will significantly improve your memory (Craik, 1975). 

Take control of your calendar 

Controlling your schedule and your distractions will help you to accomplish your goals.


If you are in control of your calendar, you will be able to complete your assignments and stay on top of your coursework. The following are steps to getting control of your calendar:


On the same day each week, (perhaps Sunday nights or Saturday mornings) plan out your schedule for the week.

Go through each class and write down what you’d like to get completed for each class that week.

Look at your calendar and determine how many hours you have to complete your work.

Determine whether your list can be completed in the amount of time that you have available. (You may want to note the amount of time you expect each assignment to take.) Make adjustments as needed. For example, if you find that it will take more hours to complete your work than you have available, you will likely need to triage your readings. Completing all of the readings is a luxury. You will need to make decisions about your readings based on what is covered in class. You should read and take notes on all of the assignments from the favored class source (the one that is used a lot in the class). This may be the textbook or a reading that directly addresses the topic for the day. You can likely skim supplemental readings.

Pencil into your calendar when you plan to get assignments completed.

Before going to bed each night, make your plan for the next day. Waking up with a plan will make you more productive.

See our handout on calendars and college for more tips on using calendars for time management. 

Use downtime to your advantage 

Beware of ‘easy’ weeks. This is the calm before the storm. Lighter work weeks are a great time to get ahead on work or to start long projects. Use the extra hours to get ahead on assignments or start big projects or papers. You should plan to work on every class every week even if you don’t have anything due. In fact, it is preferable to do some work for each of your classes every day. Spending 30 minutes per class each day will add up to three hours per week, but spreading this time out over six days is more effective than cramming it all in during one long three-hour session. If you have completed all of the work for a particular class, then use the 30 minutes to get ahead or start a longer project. 

Works consulted 

Carrier, L. M. (2003). College students’ choices of study strategies. Perceptual and Motor Skills, 96(1), 54-56.


Craik, F. I., & Tulving, E. (1975). Depth of processing and the retention of words in episodic memory. Journal of Experimental Psychology: General, 104(3), 268.


Davis, S. G., & Gray, E. S. (2007). Going beyond test-taking strategies: Building self-regulated students and teachers. Journal of Curriculum and Instruction, 1(1), 31-47.


Edwards, A. J., Weinstein, C. E., Goetz, E. T., & Alexander, P. A. (2014). Learning and study strategies: Issues in assessment, instruction, and evaluation. Elsevier.


Junco, R., & Cotten, S. R. (2012). No A 4 U: The relationship between multitasking and academic performance. Computers & Education, 59(2), 505-514.


Mackenzie, A. M. (1994). Examination preparation, anxiety and examination performance in a group of adult students. International Journal of Lifelong Education, 13(5), 373-388.


McGuire, S.Y. & McGuire, S. (2016). Teach Students How to Learn: Strategies You Can Incorporate in Any Course to Improve Student Metacognition, Study Skills, and Motivation. Stylus Publishing, LLC.


Newport, C. (2006). How to become a straight-A student: The unconventional strategies real college students use to score high while studying less. Three Rivers Press.


Paul, K. (1996). Study smarter, not harder. Self Counsel Press.


Robinson, A. (1993). What smart students know: Maximum grades, optimum learning, minimum time. Crown Trade Paperbacks.


Wissman, K. T., Rawson, K. A., & Pyc, M. A. (2012). How and when do students use flashcards? Memory, 20, 568-579.

What is a woman?: Time for the jury to decide?

Unleash the Trial Lawyers to End Mutilation of Gender-Dysphoric Children 

Wesley J. Smith 

Many in the medical and political establishments are pushing “gender-affirming care” as the only humane means of treating children who believe they are not the sex they were born. This so-called care includes radical interventions such as puberty blocking, mastectomies, facial surgeries, and even genital removal. One recent study found that the median age for mastectomies in such cases is 16 — meaning that half of the girls whose breasts were cut off were under that age, and indeed, some were as young as twelve.


How do you stop such a destructive juggernaut? Lawyers! It seems to me that eventually suing doctors and others who pushed or cooperated with such drastic actions will become the equivalent for lawyers of the “Camp Lejeune” lawsuits currently proliferating and being advertised ubiquitously on television. 

It’s Already Starting 

This hoped-for remedy has already started in England, where a class-action lawsuit will soon be filed against a now-closed youth gender clinic. And now, Americans who were subjected to such interventions while under age — and later “de-transitioned” to the sex they were born — may be thinking about suing.


One such case looks about to be brought by “Chloe,” who had a mastectomy while under age. From the “Notice of Intent to Sue” letter sent to doctors by her attorneys: 

Chloe is a biological female who suffered from a perceived psychological issue “gender dysphoria” beginning at 9 years of age. Under Defendants’ advice and supervision, between 13-17 years old Chloe underwent harmful transgender treatment, specifically, puberty blockers, off-label cross-sex hormone treatment, and a double mastectomy. This radical, off-label, and inadequately studied course of chemical and surgical “treatment” for Chloe’s mental condition amounted to medical experimentation on Chloe.


As occurs in most gender dysphoria cases, Chloe’s psychological condition resolved on its own when she was close to reaching adulthood, and she no longer desires to identify as a male. Unfortunately, as a result of the so-called transgender “treatment” that Defendants performed on Chloe, she now has deep emotional wounds, severe regrets, and distrust for the medical system. Chloe has suffered physically, socially, neurologically, and psychologically. Among other harms, she has suffered mutilation to her body and lost social development with her peers at milestones that can never be reversed or regained.


Defendants coerced Chloe and her parents to undergo what amounted to a medical experiment by propagating two lies. First, Defendants falsely informed Chloe and her parents that Chloe’s gender dysphoria would not resolve unless Chloe socially and medically transitioned to appear more like a male. Second, Defendants also falsely informed Chloe and her parents that Chloe was at a high risk for suicide unless she socially and medically transitioned to appear more like a male. Chloe has been informed by her parents that Defendants even gave them the ultimatum: “would you rather have a dead daughter or a live son?” 

But Is It All True? 

Whether that is true remains to be proven, but if credible evidence of such behavior is brought before a jury, it could eventually lead to Alex Jones–level damages being imposed against the entire gender-affirming medical/industrial complex.


Yes, I know many trial lawyers will be reluctant to face accusations of “transphobia.” But in my experience — as a once-practicing trial attorney and one who has written often about such practitioners — when the smell of money is in the water, ideology is generally not the first priority.


Time will tell. But in the meantime, go Chloe! And please, do not accept a confidential settlement. If you strike paydirt, the country needs to know, because that will deter further such “medical” interventions.

 

Sunday 13 November 2022

On OBEs

What Really Happens During an Out-of-Body Experience? 

Medically reviewed by Nicole Washington, DO, MPH — By Crystal Raypole — Updated on July 22, 2022 

An out-of-body experience is often described as feeling like you’ve left your physical body. There are many potential causes, including several medical conditions and experiences. 

An out-of-body experience (OBE) is a sensation of your consciousness leaving your body. These episodes are often reported by people who’ve had a near-death experience. Some might also describe an OBE as a dissociative episode.


People typically experience their sense of self inside their physical body. You most likely view the world around you from this vantage point. But during an OBE, you may feel as if you’re outside yourself, looking at your body from another perspective.


What really goes on during an OBE? Does your consciousness actually leave your body? Experts aren’t totally sure, but they have a few hunches, which we’ll get into later.

What does an OBE feel like? 

It’s hard to nail down what an OBE feels like, exactly.


According to accounts from people who’ve experienced them, they generally involve:


a feeling of floating outside your body

an altered perception of the world, such as looking down from a height

the feeling that you’re looking down at yourself from above

a sense that what’s happening is very real

OBEs typically happen without warning and usually don’t last for very long.


If you have a neurological condition, such as epilepsy, you may be more likely to experience OBEs. They may also happen more frequently. But for many people, an OBE will happen very rarely, maybe only once in a lifetime, if at all.


Some estimates suggest around 5 percent of people have experienced the sensations associated with an OBE, though some suggest this number may be higher. 

Does anything happen physically? 

There’s some debate over whether the sensations and perceptions associated with OBEs happen physically or as a sort of hallucinatory experience.


A 2022 review explored this question by evaluating a variety of studies and case reports on consciousness, cognitive awareness, and recall in people who survived cardiac arrest.


They noted that some people report experiencing a separation from their body during resuscitation and some even reported an awareness of events they wouldn’t have seen from their actual perspective.


In addition, one study included in the review noted that two participants reported having both visual and auditory experiences while in cardiac arrest. Only one was well enough to follow up, but he gave an accurate, detailed description of what took place for about three minutes of his resuscitation from cardiac arrest.


Still, there’s no scientific evidence to support the idea that a person’s consciousness can actually travel outside the body. 

Veridical perception 

Veridical perception is a controversial concept. It refers to the idea that you can leave your body during an OBE, allowing you to witness something that you may not have otherwise.


Some anecdotal reports of this phenomenon exist, with a few people even providing specific, accurate details about events that happened during surgical procedures or while they were clinically dead.


Many people use these stories as evidence to support the existence of life after death.


However, the idea of veridical perception is still limited to anecdotal claims, and there is no research available to support it.


A 2014 study investigating the validity of veridical perception in people who had survived cardiac arrest found that neither of the two individuals who reported awareness during resuscitation was able to identify specific items that were only viewable from above. 

What can cause them? 

No one’s sure about the exact causes of OBEs, but experts have identified several possible explanations. 

Stress or trauma 

A frightening, dangerous, or difficult situation can provoke a fear response, which might cause you to dissociate from the situation and feel as if you’re an onlooker, watching the events from somewhere outside your body.


According to 2017 research reviewing the experiences of women in labor, OBEs during childbirth aren’t unusual.


The study didn’t specifically link OBEs to post-traumatic stress disorder, but the authors did point out that women who had OBEs had either gone through trauma during labor or another situation not related to childbirth.


This suggests that OBEs could occur as a way to cope with trauma, but more research is needed on this potential link. 

Medical conditions 

Experts have linked several medical and mental health conditions to OBEs, including:


epilepsy

migraine

cardiac arrest

brain injuries

depression

anxiety

Guillain-Barré syndrome

Dissociative disorders, particularly depersonalization-derealization disorder, can involve frequent feelings or episodes where you seem to be observing yourself from outside your body.


Sleep paralysis has also been noted as a possible cause of OBEs. It refers to a temporary state of waking paralysis that occurs during REM sleep and often involves hallucinations.


Research suggests that many people who have OBEs during a near-death experience also experience sleep paralysis.


In addition, a 2020 review of the literature suggests that sleep-wake disturbances may contribute to dissociative symptoms, which can include the feeling of leaving your body. 

Medication and drugs 

Some people report having an OBE while under the influence of anesthesia.


Other substances, including cannabis, ketamine, or hallucinogenic drugs such as LSD, can cause this sensation. 

Near-death experiences 

OBEs can occur during near-death experiences, often alongside other phenomena like flashbacks of previous memories or seeing a light at the end of a tunnel.


Though it’s not clear exactly why this happens, it’s believed to be caused by disruptions in certain areas of the brain involved with processing sensory information. A 2021 review suggests that these experiences may be more likely to occur during life-threatening situations, which can include:


cardiac arrest

traumatic injury

brain hemorrhage

drowning

suffocation 

Strong G-forces 

Pilots and astronauts sometimes experience OBEs under strong gravitational forces, or G-forces. These forces cause blood to pool in the lower body, which can lead to loss of consciousness and may induce an OBE.


Extreme G-forces can also cause spatial disorientation, peripheral vision loss, and disconnection between cognition and the ability to act. 


Paranormal 

Though not backed by research, some people believe that OBEs can occur when your soul or spirit leaves your body.


One form is known as “traveling clairvoyance,” which some mediums claim allows your soul to visit distant locations in order to gain information.


Others believe that certain meditative practices can help you reach a state of consciousness that transcends the body and mind, leading to an OBE.


Some people also experiment with astral projection, which is a spiritual practice that involves making an intentional effort to send your consciousness from your body toward a spiritual plane or dimension.


However, research has not been able to show that these practices cause OBEs. 

Other experiences 

OBEs might be able to be induced, intentionally or accidentally, by:


brain stimulation

sleep deprivation

sensory deprivation

hypnosis or meditative trance

However, additional research is still needed to support this. 

Do out-of-body experiences pose any risks? 

Existing research hasn’t connected experiencing spontaneous OBEs to any serious health risks. In some cases, you might feel a bit dizzy or disoriented after.


However, OBEs and dissociation in general can cause lingering feelings of emotional distress.


You might feel confused over what happened or wonder if you have a brain issue or mental health condition. You might also not like the sensation of an OBE and worry about it happening again.


Some people also claim that it’s possible for your consciousness to remain trapped outside of your body following an OBE, but there’s no evidence to support this. 

Should I see a doctor? 

Simply having an OBE doesn’t necessarily mean you need to see a healthcare professional. You may have this experience once just before drifting off to sleep, for example, and never again. If you don’t have any other symptoms, you probably don’t have any reason for concern.


If you feel uneasy about what happened, even if you don’t have any physical or psychological conditions, there’s no harm in mentioning the experience to a doctor. They may be able to help by ruling out serious conditions or offering some reassurance.


It’s also a good idea to talk with a healthcare professional if you’re having any sleep issues, including insomnia or symptoms of sleep paralysis, such as hallucinations. 




Saturday 12 November 2022

Darwinists have got circular argumentation down to a science?

Evolution’s Circular Web of Self-Referencing Literature 

Cornelius Hunter 

Evolutionists believe evolution is true. As justification, they cite previous studies. But those previous studies were done by other evolutionists who, yes, believe evolution is true. The studies do not confirm evolution — they interpret the evidence according to evolutionary theory, no matter how much the evidence contradicts the theory. So, citing those previous studies does little to justify the belief in evolution.


It is a circular web of self-referencing literature. The blind lead the blind. Here is an example. For years Joe Thornton has been claiming proteins evolved. See, for instance, “Simple mechanisms for the evolution of protein complexity,” from Protein Science.


As his starting point in the paper, Thornton cites several previous works, falsely claiming that they demonstrate evolution. One of his citations is a paper, “Protein folds, functions and evolution,” from 1999, when I was working on my doctorate in this area.


This 1999 paper is cited to support the claim in the Thornton paper that “During the last ~3.8 billion years, evolution has generated proteins with thousands of different folds.” But the 1999 study demonstrates no such thing — not even close. This is not controversial; there is no debate. This is simply a false citation. It is another example of the web of false, self-referencing literature.

Another Citation 

Here is another citation in the Thornton paper: “Eye evolution and its functional basis,” by Dan Nilsson from 2013, in the journal Visual Neuroscience. This 2013 paper is cited to support the claim in the Thornton paper that the evolution of the vertebrate eye has been proven. But the 2013 Nilsson paper proves no such thing. Again, Nilsson takes evolution as his starting point. He presupposes evolution is true and works from there. Nowhere does Nilsson demonstrate that the evolution of the eye is likely or even could have occurred.


Nilsson has been doing this for years, going back to his 1994 paper, “A pessimistic estimate of the time required for an eye to evolve,” in Proceedings of the Royal Society B.

Not Whether, but How Fast 

That 1994 paper explicitly stated (in the first paragraph) that the question is no longer whether the eye evolved, but how fast it evolved. Nonetheless, the paper was heavily promoted (and mischaracterized) by evolution promoter Richard Dawkins. For years after that, the paper was falsely cited as proof that the eye evolved, no question about it. If you like videos, Nilsson reviews his work in a 2019 presentation. 

Nilsson does very little original biology work. Instead, he offers evolutionary just-so stories. His work is something of a poster child for this false citation pseudoscience problem. The new Thornton paper is yet another example of how pervasive the problem is, and how vacuous is evolutionary science.


The formula goes like this: 1. Evolution is true. 2. Here’s how it must have happened. 3. Look, yet more proof of evolution.


This post is adapted from Dr. Hunter’s comments on Twitter.

 

 

The design filter can spot a dirty game?

Did Chess Ace Hans Niemann Cheat? A Design Detection Poser 

Evolution News @DiscoveryCSC 

On a new episode of ID the Future, mathematician William Dembski and host Eric Anderson explore whether design detection tools shed any light on the recent chess scandal involving world chess champion Magnus Carlsen and American grandmaster Hans Moke Niemann. Did Niemann cheat in a match where he beat Carlsen, as some have claimed? There is no smoking gun in the case, so how might one determine if cheating occurred? At first glance the problem might seem far removed from the design-detecting rules and tools Dembski laid out in his Cambridge University Press monograph The Design Inference. But actually there is some intriguing overlap. Is there a way to dig into the chess data and determine whether Niemann secretly used a computer chess engine to help him win the match? Tune in as Dembski and Anderson wrestle with the problem. Download the podcast or listen to it here. 
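The episode does not spell out a method, but one simple statistic analysts have applied in such chess disputes is the engine-match rate: how often a player's moves coincide with an engine's first choice, compared with what unaided play would produce. Here is a minimal sketch under a binomial model; the baseline rate and the game numbers below are invented for illustration and are not from Niemann's actual games.

```python
from math import comb

def binomial_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of matching the
    engine's top choice at least k times in n moves by skill/luck alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def suspicion_p_value(matches, total_moves, baseline_rate):
    """Crude design-detection statistic: how surprising is this
    engine-match count under the hypothesis of unaided play?"""
    return binomial_sf(matches, total_moves, baseline_rate)

# Hypothetical: suppose a strong human matches the engine's first choice
# about 55% of the time, and a player matched 38 of 40 moves in one game.
p_extreme = suspicion_p_value(38, 40, 0.55)   # vanishingly small
p_typical = suspicion_p_value(22, 40, 0.55)   # unremarkable (22 is the mean)
```

A real analysis would be far more careful — forced moves, opening theory, and position sharpness all inflate match rates — but the sketch shows the shape of the inference: rejecting the "unaided play" chance hypothesis when the observed pattern is too improbable under it.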



 

1914: a marked year. II

Legacy of World War I  

BY HISTORY.COM EDITORS 

World War I Begins 

Convinced that Austria-Hungary was readying for war, the Serbian government ordered the Serbian army to mobilize and appealed to Russia for assistance. On July 28, Austria-Hungary declared war on Serbia, and the tenuous peace between Europe’s great powers quickly collapsed.


Within a week, Russia, Belgium, France, Great Britain and Serbia had lined up against Austria-Hungary and Germany, and World War I had begun. 

Legacy of World War I 

World War I brought about massive social upheaval, as millions of women entered the workforce to replace men who went to war and those who never came back. The first global war also helped to spread one of the world’s deadliest global pandemics, the Spanish flu epidemic of 1918, which killed an estimated 20 to 50 million people.


World War I has also been referred to as “the first modern war.” Many of the technologies now associated with military conflict—machine guns, tanks, aerial combat and radio communications—were introduced on a massive scale during World War I.


The severe effects that chemical weapons such as mustard gas and phosgene had on soldiers and civilians during World War I galvanized public and military attitudes against their continued use. The Geneva Convention agreements, signed in 1925, restricted the use of chemical and biological agents in warfare and remain in effect today.


Friday 11 November 2022

An exclusive category

Because my thirst for self-flagellation apparently knows no bounds, I've been looking at what Christendom's theologians call the threeness-oneness problem of the trinity, i.e. how can God be three and yet one? The usual fudge is to state that the term 'God', as applied to each of the three persons subsisting within the shared essence, is an adjective and not a count noun, but that the God in which they all simultaneously subsist, and with whom/what(?) they are supposedly numerically identical, is indeed a concrete reality. I reject this characterisation of the issue. 

The issue is one of identity, not primarily arithmetic

According to the scripture, there is one God who is entitled to exclusive devotion. 

Deuteronomy 5:6,7 ASV: "6 I am JEHOVAH thy God, who brought thee out of the land of Egypt, out of the house of bondage.


7 Thou shalt have no other gods before me." 

Psalm 83:18 ASV: "18 That they may know that thou alone, whose name is Jehovah, Art the Most High over all the earth."

Thus there is a person who alone is entitled to our absolute devotion to the exclusion of all others, i.e. anyone/anything not identical to said person. The issue, then, is: who is this person? Once we have identified this person, all others would be excluded from the category of Most High God by definition. The scriptures make it clear that this one is both a God and the God; thus one cannot be identical to this one and not be a God, as Christendom's theologians senselessly claim about the members of their triune God. In scripture ONLY the God of Jesus Christ, i.e. JEHOVAH, is ever referred to as the God without qualification. And he is definitely a God.

Deuteronomy 5:9 ASV: "9 thou shalt not bow down thyself unto them, nor serve them; for I, JEHOVAH, thy God, am A jealous God," 

Thus the claim that the God and Father of Jesus Christ is not a God in his own right is falsified. Indeed, he is the only God entitled to our absolute devotion.


J Robert Oppenheimer: a brief history.

 J. Robert Oppenheimer 

J. Robert Oppenheimer (/ˈɒpənˌhaɪmər/; April 22, 1904 – February 18, 1967) was an American theoretical physicist. A professor of physics at the University of California, Berkeley, Oppenheimer was the wartime head of the Los Alamos Laboratory and is often credited as the "father of the atomic bomb" for his role in the Manhattan Project – the World War II undertaking that developed the first nuclear weapons. Oppenheimer was among those who observed the Trinity test in New Mexico, where the first atomic bomb was successfully detonated on July 16, 1945. He later remarked that the explosion brought to mind words from the Bhagavad Gita: "Now I am become Death, the destroyer of worlds."[2][note 2] In August 1945, the weapons were used in the atomic bombings of Hiroshima and Nagasaki. 

After the war ended, Oppenheimer became chairman of the influential General Advisory Committee of the newly created United States Atomic Energy Commission. He used that position to lobby for international control of nuclear power to avert nuclear proliferation and a nuclear arms race with the Soviet Union. He opposed the development of the hydrogen bomb during a 1949–1950 governmental debate on the question and subsequently took stances on defense-related issues that provoked the ire of some factions in the U.S. government and military. During the Second Red Scare, those stances, together with past associations Oppenheimer had with people and organizations affiliated with the Communist Party, led to him suffering the revocation of his security clearance in a much-written-about hearing in 1954. Effectively stripped of his direct political influence, he continued to lecture, write, and work in physics. Nine years later, President John F. Kennedy awarded (and Lyndon B. Johnson presented) him with the Enrico Fermi Award as a gesture of political rehabilitation.


Oppenheimer's achievements in physics included the Born–Oppenheimer approximation for molecular wave functions, work on the theory of electrons and positrons, the Oppenheimer–Phillips process in nuclear fusion, and the first prediction of quantum tunneling. With his students he also made important contributions to the modern theory of neutron stars and black holes, as well as to quantum mechanics, quantum field theory, and the interactions of cosmic rays. As a teacher and promoter of science, he is remembered as a founding father of the American school of theoretical physics that gained world prominence in the 1930s. After World War II, he became director of the Institute for Advanced Study in Princeton, New Jersey. 

Childhood and education

J. Robert Oppenheimer was born in New York City on April 22, 1904,[note 1][7] to Ella (née Friedman), a painter, and Julius Seligmann Oppenheimer, a wealthy textile importer. Born in Hanau, Hesse-Nassau, Prussia, Germany, Julius came to the United States as a teenager in 1888 with few resources, no money, no baccalaureate studies, and no knowledge of the English language. He was hired by a textile company and within a decade was an executive there, eventually becoming wealthy.[8] The Oppenheimers were both secular Ashkenazi Jews; his father was German Jewish, and his mother, who was from New York, descended from a German Jewish family that had lived in the U.S. since the 1840s.[9] In 1912, the family moved to an apartment on the 11th floor of 155 Riverside Drive, near West 88th Street, Manhattan, an area known for luxurious mansions and townhouses.[7] Their art collection included works by Pablo Picasso and Édouard Vuillard, and at least three original paintings by Vincent van Gogh.[10] Robert had a younger brother, Frank, who also became a physicist.[11]


Oppenheimer was initially educated at Alcuin Preparatory School; in 1911, he entered the Ethical Culture Society School.[12] This had been founded by Felix Adler to promote a form of ethical training based on the Ethical Culture movement, whose motto was "Deed before Creed". His father had been a member of the Society for many years, serving on its board of trustees from 1907 to 1915.[13] Oppenheimer was a versatile scholar, interested in English and French literature, and particularly in mineralogy.[14] He completed the third and fourth grades in one year and skipped half of the eighth grade.[12] During his final year, he became interested in chemistry.[15] He entered Harvard College one year after graduation, at age 18, because he suffered an attack of colitis while prospecting in Joachimstal during a family summer vacation in Europe. To help him recover from the illness, his father enlisted the help of his English teacher Herbert Smith who took him to New Mexico, where Oppenheimer fell in love with horseback riding and the southwestern United States.[16] 

Oppenheimer majored in chemistry, but Harvard required science students to also study history, literature, and philosophy or mathematics. He compensated for his late start by taking six courses each term and was admitted to the undergraduate honor society Phi Beta Kappa. In his first year, he was admitted to graduate standing in physics on the basis of independent study, which meant he was not required to take the basic classes and could enroll instead in advanced ones. He was attracted to experimental physics by a course on thermodynamics that was taught by Percy Bridgman. He graduated summa cum laude in three years.[17] 

Studies in Europe 

In 1924, Oppenheimer was informed that he had been accepted into Christ's College, Cambridge. He wrote to Ernest Rutherford requesting permission to work at the Cavendish Laboratory. Bridgman provided Oppenheimer with a recommendation, which conceded that Oppenheimer's clumsiness in the laboratory made it apparent his forte was not experimental but rather theoretical physics. Rutherford was unimpressed, but Oppenheimer went to Cambridge in the hope of landing another offer.[18] He was ultimately accepted by J. J. Thomson on condition that he complete a basic laboratory course.[19] He developed an antagonistic relationship with his tutor, Patrick Blackett, who was only a few years his senior. While on vacation, as recalled by his friend Francis Fergusson, Oppenheimer once confessed that he had left an apple doused with noxious chemicals on Blackett's desk. While Fergusson's account is the only detailed version of this event, Oppenheimer's parents were alerted by the university authorities who considered placing him on probation, a fate prevented by his parents successfully lobbying the authorities.[20]


Oppenheimer was a tall, thin chain smoker,[21] who often neglected to eat during periods of intense thought and concentration. Many of his friends described him as having self-destructive tendencies. A disturbing event occurred when he took a vacation from his studies in Cambridge to meet up with Fergusson in Paris. Fergusson noticed that Oppenheimer was not well. To help distract him from his depression, Fergusson told Oppenheimer that he (Fergusson) was to marry his girlfriend Frances Keeley. Oppenheimer did not take the news well. He jumped on Fergusson and tried to strangle him. Although Fergusson easily fended off the attack, the episode convinced him of Oppenheimer's deep psychological troubles. Throughout his life, Oppenheimer was plagued by periods of depression,[22][23] and he once told his brother, "I need physics more than friends".[24]


In 1926, Oppenheimer left Cambridge for the University of Göttingen to study under Max Born. Göttingen was one of the world's leading centers for theoretical physics. Oppenheimer made friends who went on to great success, including Werner Heisenberg, Pascual Jordan, Wolfgang Pauli, Paul Dirac, Enrico Fermi and Edward Teller. He was known for being too enthusiastic in discussion, sometimes to the point of taking over seminar sessions.[25] This irritated some of Born's other students so much that Maria Goeppert presented Born with a petition signed by herself and others threatening a boycott of the class unless he made Oppenheimer quiet down. Born left it out on his desk where Oppenheimer could read it, and it was effective without a word being said.[26]


He obtained his Doctor of Philosophy degree in March 1927 at age 23, supervised by Born.[27] After the oral exam, James Franck, the professor administering, reportedly said, "I'm glad that's over. He was on the point of questioning me."[4] Oppenheimer published more than a dozen papers at Göttingen, including many important contributions to the new field of quantum mechanics. He and Born published a famous paper on the Born–Oppenheimer approximation, which separates nuclear motion from electronic motion in the mathematical treatment of molecules, allowing nuclear motion to be neglected to simplify calculations. It remains his most cited work.[28] 
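The separation described above can be stated compactly. The following textbook form of the Born–Oppenheimer ansatz is not reproduced in the source and is added here only as a reference sketch: the total molecular wave function is factored into an electronic part, solved at fixed nuclear positions, and a nuclear part moving on the resulting energy surface.

```latex
% Born--Oppenheimer ansatz: factor the molecular wave function
\Psi(\mathbf{r}, \mathbf{R}) \;\approx\; \psi_{\mathrm{el}}(\mathbf{r}; \mathbf{R})\,\chi_{\mathrm{nuc}}(\mathbf{R})

% Electronic problem solved with the nuclei clamped at positions R:
H_{\mathrm{el}}\,\psi_{\mathrm{el}}(\mathbf{r}; \mathbf{R})
  \;=\; E_{\mathrm{el}}(\mathbf{R})\,\psi_{\mathrm{el}}(\mathbf{r}; \mathbf{R})
```

The eigenvalue \(E_{\mathrm{el}}(\mathbf{R})\) then serves as the potential energy surface governing \(\chi_{\mathrm{nuc}}\). The approximation is justified by the mass ratio \(m_e/M_{\mathrm{nuc}} \lesssim 1/1836\): the light electrons adjust almost instantaneously to the slow nuclear motion.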

Early professional work 

Educational work

Oppenheimer was awarded a United States National Research Council fellowship to the California Institute of Technology (Caltech) in September 1927. Bridgman also wanted him at Harvard, so a compromise was reached whereby he split his fellowship for the 1927–28 academic year between Harvard in 1927 and Caltech in 1928.[29] At Caltech he struck up a close friendship with Linus Pauling, and they planned to mount a joint attack on the nature of the chemical bond, a field in which Pauling was a pioneer, with Oppenheimer supplying the mathematics and Pauling interpreting the results. Both the collaboration and their friendship ended when Pauling began to suspect Oppenheimer of becoming too close to his wife, Ava Helen Pauling. Once, when Pauling was at work, Oppenheimer had arrived at their home and invited Ava Helen to join him on a tryst in Mexico. Though she refused and reported the incident to her husband,[30] the invitation, and her apparent nonchalance about it, disquieted Pauling and he ended his relationship with Oppenheimer. Oppenheimer later invited him to become head of the Chemistry Division of the Manhattan Project, but Pauling refused, saying he was a pacifist.[31]


In the autumn of 1928, Oppenheimer visited Paul Ehrenfest's institute at the University of Leiden, the Netherlands, where he impressed by giving lectures in Dutch, despite having little experience with the language. There he was given the nickname of Opje,[32] later anglicized by his students as "Oppie".[33] From Leiden he continued on to the Swiss Federal Institute of Technology (ETH) in Zurich to work with Wolfgang Pauli on quantum mechanics and the continuous spectrum. Oppenheimer respected and liked Pauli and may have emulated his personal style as well as his critical approach to problems.[34] 

On returning to the United States, Oppenheimer accepted an associate professorship from the University of California, Berkeley, where Raymond T. Birge wanted him so badly that he expressed a willingness to share him with Caltech.[31]


Before he began his Berkeley professorship, Oppenheimer was diagnosed with a mild case of tuberculosis and spent some weeks with his brother Frank at a New Mexico ranch, which he leased and eventually purchased. When he heard the ranch was available for lease, he exclaimed, "Hot dog!", and later called it Perro Caliente, literally "hot dog" in Spanish.[35] Later he used to say that "physics and desert country" were his "two great loves".[36] He recovered from tuberculosis and returned to Berkeley, where he prospered as an advisor and collaborator to a generation of physicists who admired him for his intellectual virtuosity and broad interests. His students and colleagues saw him as mesmerizing: hypnotic in private interaction, but often frigid in more public settings. His associates fell into two camps: one that saw him as an aloof and impressive genius and aesthete, the other that saw him as a pretentious and insecure poseur.[37] His students almost always fell into the former category, adopting his walk, speech, and other mannerisms, and even his inclination for reading entire texts in their original languages.[38] Hans Bethe said of him: 

Probably the most important ingredient he brought to his teaching was his exquisite taste. He always knew what were the important problems, as shown by his choice of subjects. He truly lived with those problems, struggling for a solution, and he communicated his concern to the group. In its heyday, there were about eight or ten graduate students in his group and about six Post-doctoral Fellows. He met this group once a day in his office and discussed with one after another the status of the student's research problem. He was interested in everything, and in one afternoon they might discuss quantum electrodynamics, cosmic rays, electron pair production and nuclear physics.[39] 

He worked closely with Nobel Prize-winning experimental physicist Ernest O. Lawrence and his cyclotron pioneers, helping them understand the data their machines were producing at the Lawrence Berkeley National Laboratory.[40] In 1936, Berkeley promoted him to full professor at a salary of $3,300 a year (equivalent to $64,000 in 2021). In return he was asked to curtail his teaching at Caltech, so a compromise was reached whereby Berkeley released him for six weeks each year, enough to teach one term at Caltech.[41] 

Scientific work 

Oppenheimer did important research in theoretical astronomy (especially as related to general relativity and nuclear theory), nuclear physics, spectroscopy, and quantum field theory, including its extension into quantum electrodynamics. The formal mathematics of relativistic quantum mechanics also attracted his attention, although he doubted its validity. His work predicted many later finds, which include the neutron, meson and neutron star.[42]


Initially, his major interest was the theory of the continuous spectrum and his first published paper, in 1926, concerned the quantum theory of molecular band spectra. He developed a method to carry out calculations of its transition probabilities. He calculated the photoelectric effect for hydrogen and X-rays, obtaining the absorption coefficient at the K-edge. His calculations accorded with observations of the X-ray absorption of the sun, but not helium. Years later it was realized that the sun was largely composed of hydrogen and that his calculations were indeed correct.[43][44] 

Oppenheimer also made important contributions to the theory of cosmic ray showers and started work that eventually led to descriptions of quantum tunneling. In 1931, he co-wrote a paper on the "Relativistic Theory of the Photoelectric Effect" with his student Harvey Hall,[45] in which, based on empirical evidence, he correctly disputed Dirac's assertion that two of the energy levels of the hydrogen atom have the same energy. Subsequently, one of his doctoral students, Willis Lamb, determined that this was a consequence of what became known as the Lamb shift, for which Lamb was awarded the Nobel Prize in physics in 1955.[42]


With his first doctoral student, Melba Phillips, Oppenheimer worked on calculations of artificial radioactivity under bombardment by deuterons. When Ernest Lawrence and Edwin McMillan bombarded nuclei with deuterons they found the results agreed closely with the predictions of George Gamow, but when higher energies and heavier nuclei were involved, the results did not conform to the theory. In 1935, Oppenheimer and Phillips worked out a theory—now known as the Oppenheimer–Phillips process—to explain the results; this theory is still in use today.[46]


As early as 1930, Oppenheimer wrote a paper that essentially predicted the existence of the positron. This was after a paper by Paul Dirac proposed that electrons could have both a positive charge and negative energy. Dirac's paper introduced an equation, known as the Dirac equation, which unified quantum mechanics, special relativity and the then-new concept of electron spin, to explain the Zeeman effect.[47] Oppenheimer, drawing on the body of experimental evidence, rejected the idea that the predicted positively charged electrons were protons. He argued that they would have to have the same mass as an electron, whereas experiments showed that protons were much heavier than electrons. Two years later, Carl David Anderson discovered the positron, for which he received the 1936 Nobel Prize in Physics.[48]


In the late 1930s, Oppenheimer became interested in astrophysics, most likely through his friendship with Richard Tolman, resulting in a series of papers. In the first of these, a 1938 paper co-written with Robert Serber entitled "On the Stability of Stellar Neutron Cores",[49] Oppenheimer explored the properties of white dwarfs. This was followed by a paper co-written with one of his students, George Volkoff, "On Massive Neutron Cores",[50] in which they demonstrated that there was a limit, the so-called Tolman–Oppenheimer–Volkoff limit, to the mass of stars beyond which they would not remain stable as neutron stars and would undergo gravitational collapse. Finally, in 1939, Oppenheimer and another of his students, Hartland Snyder, produced a paper "On Continued Gravitational Contraction",[51] which predicted the existence of what are today known as black holes. After the Born–Oppenheimer approximation paper, these papers remain his most cited, and were key factors in the rejuvenation of astrophysical research in the United States in the 1950s, mainly by John A. Wheeler.[52]
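For reference, the mass limit named above arises from the equation of relativistic hydrostatic equilibrium now called the Tolman–Oppenheimer–Volkoff equation. The source does not reproduce it; the standard textbook form, added here as a sketch, is:

```latex
% TOV equation: pressure gradient inside a static, spherically
% symmetric star in general relativity
\frac{dP}{dr} \;=\; -\,\frac{G}{r^{2}}
  \left[\rho(r) + \frac{P(r)}{c^{2}}\right]
  \left[m(r) + \frac{4\pi r^{3} P(r)}{c^{2}}\right]
  \left[1 - \frac{2G\,m(r)}{r c^{2}}\right]^{-1}

% with the enclosed mass satisfying
\frac{dm}{dr} \;=\; 4\pi r^{2}\rho(r)
```

In the Newtonian limit (\(P \ll \rho c^{2}\), \(2Gm \ll rc^{2}\)) this reduces to ordinary hydrostatic equilibrium; the relativistic correction terms are what enforce a maximum stable mass for neutron stars, beyond which collapse is unavoidable.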


Oppenheimer's papers were considered difficult to understand even by the standards of the abstract topics he was expert in. He was fond of using elegant, if extremely complex, mathematical techniques to demonstrate physical principles, though he was sometimes criticized for making mathematical mistakes, presumably out of haste. "His physics was good", said his student Snyder, "but his arithmetic awful".[42]


After World War II, Oppenheimer published only five scientific papers, one of which was in biophysics, and none after 1950. Murray Gell-Mann, a later Nobelist who, as a visiting scientist, worked with him at the Institute for Advanced Study in 1951, offered this opinion: 

He didn't have Sitzfleisch, 'sitting flesh,' when you sit on a chair. As far as I know, he never wrote a long paper or did a long calculation, anything of that kind. He didn't have patience for that; his own work consisted of little aperçus, but quite brilliant ones. But he inspired other people to do things, and his influence was fantastic.[53] 

Oppenheimer's diverse interests sometimes interrupted his focus on science. He liked things that were difficult, and since much of the scientific work appeared easy for him, he developed an interest in the mystical and the cryptic. In 1933, he learned Sanskrit and met the Indologist Arthur W. Ryder at Berkeley. He eventually read the Bhagavad Gita and the Upanishads in the original Sanskrit, and deeply pondered over them. He later cited the Gita as one of the books that most shaped his philosophy of life.[54][55]


His close confidant and colleague, Nobel Prize winner Isidor Rabi, later gave his own interpretation: 

Oppenheimer was overeducated in those fields, which lie outside the scientific tradition, such as his interest in religion, in the Hindu religion in particular, which resulted in a feeling of mystery of the universe that surrounded him like a fog. He saw physics clearly, looking toward what had already been done, but at the border he tended to feel there was much more of the mysterious and novel than there actually was ... [he turned] away from the hard, crude methods of theoretical physics into a mystical realm of broad intuition.[56] 

In spite of this, observers such as Nobel Prize-winning physicist Luis Alvarez have suggested that if he had lived long enough to see his predictions substantiated by experiment, Oppenheimer might have won a Nobel Prize for his work on gravitational collapse, concerning neutron stars and black holes.[57][58] In retrospect, some physicists and historians consider this to be his most important contribution, though it was not taken up by other scientists in his own lifetime.[59] The physicist and historian Abraham Pais once asked Oppenheimer what he considered to be his most important scientific contributions; Oppenheimer cited his work on electrons and positrons, not his work on gravitational contraction.[60] Oppenheimer was nominated for the Nobel Prize for physics three times, in 1946, 1951 and 1967, but never won.[61][62] 

Los Alamos 

On October 9, 1941, two months before the United States entered World War II, President Franklin D. Roosevelt approved a crash program to develop an atomic bomb.[91] In May 1942, National Defense Research Committee Chairman James B. Conant, who had been one of Oppenheimer's lecturers at Harvard, invited Oppenheimer to take over work on fast neutron calculations, a task that Oppenheimer threw himself into with full vigor. He was given the title "Coordinator of Rapid Rupture", which specifically referred to the propagation of a fast neutron chain reaction in an atomic bomb. One of his first acts was to host a summer school for bomb theory at his building in Berkeley. The mix of European physicists and his own students—a group including Robert Serber, Emil Konopinski, Felix Bloch, Hans Bethe and Edward Teller—kept themselves busy by calculating what needed to be done, and in what order, to make the bomb.[92] 

In June 1942, the US Army established the Manhattan Project to handle its part in the atom bomb project and began the process of transferring responsibility from the Office of Scientific Research and Development to the military.[94] In September, Brigadier General Leslie Groves was appointed director of what became known as the Manhattan Project.[95] He selected Oppenheimer to head the project's secret weapons laboratory. This was a choice that surprised many because Oppenheimer had left-wing political views and no record as a leader of large projects. Groves was concerned by the fact that Oppenheimer did not have a Nobel Prize and might not have had the prestige to direct fellow scientists.[96] However, he was impressed by Oppenheimer's singular grasp of the practical aspects of designing and constructing an atomic bomb, and by the breadth of his knowledge. As a military engineer, Groves knew that this would be vital in an interdisciplinary project that would involve not just physics, but chemistry, metallurgy, ordnance and engineering. Groves also detected in Oppenheimer something that many others did not, an "overweening ambition" that Groves reckoned would supply the drive necessary to push the project to a successful conclusion. Isidor Rabi considered the appointment "a real stroke of genius on the part of General Groves, who was not generally considered to be a genius".[97]


Oppenheimer and Groves decided that for security and cohesion they needed a centralized, secret research laboratory in a remote location. Scouting for a site in late 1942, Oppenheimer was drawn to New Mexico, not far from his ranch. On November 16, 1942, Oppenheimer, Groves and others toured a prospective site. Oppenheimer feared that the high cliffs surrounding the site would make his people feel claustrophobic, while the engineers were concerned with the possibility of flooding. He then suggested and championed a site that he knew well: a flat mesa near Santa Fe, New Mexico, which was the site of a private boys' school called the Los Alamos Ranch School. The engineers were concerned about the poor access road and the water supply but otherwise felt that it was ideal.[98] The Los Alamos Laboratory was built on the site of the school, taking over some of its buildings, while many new buildings were erected in great haste. At the laboratory, Oppenheimer assembled a group of the top physicists of the time, which he referred to as the "luminaries".[99]


Los Alamos was initially supposed to be a military laboratory, and Oppenheimer and other researchers were to be commissioned into the Army. He went so far as to order himself a lieutenant colonel's uniform and take the Army physical test, which he failed. Army doctors considered him underweight at 128 pounds (58 kg), diagnosed his chronic cough as tuberculosis and were concerned about his chronic lumbosacral joint pain.[100] The plan to commission scientists fell through when Robert Bacher and Isidor Rabi balked at the idea. Conant, Groves, and Oppenheimer devised a compromise whereby the laboratory was operated by the University of California under contract to the War Department.[101] It soon turned out that Oppenheimer had hugely underestimated the magnitude of the project; Los Alamos grew from a few hundred people in 1943 to over 6,000 in 1945.[100]


Oppenheimer at first had difficulty with the organizational division of large groups, but rapidly learned the art of large-scale administration after he took up permanent residence on the mesa. He was noted for his mastery of all scientific aspects of the project and for his efforts to control the inevitable cultural conflicts between scientists and the military. He was an iconic figure to his fellow scientists, as much a symbol of what they were working toward as a scientific director. Victor Weisskopf put it thus: 

Oppenheimer directed these studies, theoretical and experimental, in the real sense of the words. Here his uncanny speed in grasping the main points of any subject was a decisive factor; he could acquaint himself with the essential details of every part of the work. He did not direct from the head office. He was intellectually and physically present at each decisive step. He was present in the laboratory or in the seminar rooms, when a new effect was measured, when a new idea was conceived. It was not that he contributed so many ideas or suggestions; he did so sometimes, but his main influence came from something else. It was his continuous and intense presence, which produced a sense of direct participation in all of us; it created that unique atmosphere of enthusiasm and challenge that pervaded the place throughout its time.[102] 

At this point in the war, there was considerable anxiety among the scientists that the Germans might be making faster progress on an atomic weapon than they were.[103][104] In a letter dated May 25, 1943, Oppenheimer responded to a proposal from Fermi to use radioactive materials to poison German food supplies. Oppenheimer asked Fermi whether he could produce enough strontium without letting too many in on the secret. Oppenheimer continued, "I think we should not attempt a plan unless we can poison food sufficient to kill a half a million men."[105] 

In 1943 development efforts were directed to a plutonium gun-type fission weapon called "Thin Man". Initial research on the properties of plutonium was done using cyclotron-generated plutonium-239, which was extremely pure but could only be created in tiny amounts. When Los Alamos received the first sample of plutonium from the X-10 Graphite Reactor in April 1944 a problem was discovered: reactor-bred plutonium had a higher concentration of plutonium-240, making it unsuitable for use in a gun-type weapon.[106] In July 1944, Oppenheimer abandoned the gun design in favor of an implosion-type weapon. Using chemical explosive lenses, a sub-critical sphere of fissile material could be squeezed into a smaller and denser form. The metal needed to travel only very short distances, so the critical mass would be assembled in much less time.[107] In August 1944 Oppenheimer implemented a sweeping reorganization of the Los Alamos laboratory to focus on implosion.[108] He concentrated the development efforts on the gun-type device, a simpler design that only had to work with uranium-235, in a single group, and this device became Little Boy in February 1945.[109] After a mammoth research effort, the more complex design of the implosion device, known as the "Christy gadget" after Robert Christy, another student of Oppenheimer's,[110] was finalized in a meeting in Oppenheimer's office on February 28, 1945.[111]


In May 1945 an Interim Committee was created to advise and report on wartime and postwar policies regarding the use of nuclear energy. The Interim Committee in turn established a scientific panel consisting of Arthur Compton, Fermi, Lawrence and Oppenheimer to advise it on scientific issues. In its presentation to the Interim Committee, the scientific panel offered its opinion not just on the likely physical effects of an atomic bomb, but on its likely military and political impact.[112] This included opinions on such sensitive issues as whether or not the Soviet Union should be advised of the weapon in advance of its use against Japan.[113] 

Trinity 

The joint work of the scientists at Los Alamos resulted in the world's first nuclear explosion, near Alamogordo, New Mexico, on July 16, 1945. Oppenheimer had given the site the codename "Trinity" in mid-1944 and said later that it was from one of John Donne's Holy Sonnets. According to the historian Gregg Herken, this naming could have been an allusion to Jean Tatlock, who had committed suicide a few months previously and had in the 1930s introduced Oppenheimer to Donne's work.[115]


Oppenheimer later recalled that, while witnessing the explosion, he thought of a verse from the Bhagavad Gita (XI,12): divi sūrya-sahasrasya bhaved yugapad utthitā yadi bhāḥ sadṛśī sā syād bhāsas tasya mahātmanaḥ[116] 

If the radiance of a thousand suns were to burst at once into the sky, that would be like the splendor of the mighty one ...[5][117] 

Years later he would explain that another verse had also entered his head at that time: namely, the famous verse: "kālo'smi lokakṣayakṛtpravṛddho lokānsamāhartumiha pravṛttaḥ" (XI,32),[118] which he translated as "I am become Death, the destroyer of worlds."[note 2]


In 1965, when he was persuaded to quote again for a television broadcast, he said: 

We knew the world would not be the same. A few people laughed, a few people cried. Most people were silent. I remembered the line from the Hindu scripture, the Bhagavad Gita; Vishnu is trying to persuade the Prince that he should do his duty and, to impress him, takes on his multi-armed form and says, 'Now I am become Death, the destroyer of worlds.' I suppose we all thought that, one way or another.[3] 

Among those present with Oppenheimer in the control bunker at the site were his brother Frank and Brigadier General Thomas Farrell. When Jeremy Bernstein asked Frank what Robert's first words after the test had been, the answer was "I guess it worked."[119] Farrell summarized Robert's reaction as follows: 

Dr. Oppenheimer, on whom had rested a very heavy burden, grew tenser as the last seconds ticked off. He scarcely breathed. He held on to a post to steady himself. For the last few seconds, he stared directly ahead and then when the announcer shouted "Now!" and there came this tremendous burst of light followed shortly thereafter by the deep growling roar of the explosion, his face relaxed into an expression of tremendous relief.[120] 

Physicist Isidor Rabi noticed Oppenheimer's disconcerting triumphalism: "I'll never forget his walk; I'll never forget the way he stepped out of the car ... his walk was like High Noon ... this kind of strut. He had done it."[121] At an assembly at Los Alamos on August 6 (the evening of the atomic bombing of Hiroshima), Oppenheimer took to the stage and clasped his hands together "like a prize-winning boxer" while the crowd cheered. He noted his regret that the weapon had not been available in time to use against Nazi Germany.[122] However, he and many of the project staff were very upset about the bombing of Nagasaki, as they did not feel the second bomb was necessary from a military point of view.[123] He traveled to Washington on August 17 to hand-deliver a letter to Secretary of War Henry L. Stimson expressing his revulsion and his wish to see nuclear weapons banned.[124] In October 1945 Oppenheimer was granted an interview with President Harry S. Truman. The meeting, however, went badly after Oppenheimer remarked that he felt he had "blood on my hands". The remark infuriated Truman and put an end to the meeting. Truman later told his Undersecretary of State Dean Acheson, "I don't want to see that son-of-a-bitch in this office ever again."[125]


For his services as director of Los Alamos, Oppenheimer was awarded the Medal for Merit from President Harry S. Truman in 1946.[126] 

Final years and death 

The frontiers of science are separated now by long years of study, by specialized vocabularies, arts, techniques, and knowledge from the common heritage even of a most civilized society; and anyone working at the frontier of such science is in that sense a very long way from home, a long way too from the practical arts that were its matrix and origin, as indeed they were of what we today call art.


Robert Oppenheimer, "Prospects in the Arts and Sciences" in Man's Right to Knowledge[220] 

Starting in 1954, Oppenheimer lived for several months of the year on the island of Saint John in the U.S. Virgin Islands. In 1957, he purchased a 2-acre (0.81 ha) tract of land on Gibney Beach, where he built a spartan home on the beach.[221] He spent a considerable amount of time sailing with his daughter Toni and wife Kitty.[222]


Oppenheimer's first public appearance following the stripping of his security clearance was a lecture titled "Prospects in the Arts and Sciences" for the Columbia University Bicentennial radio show Man's Right to Knowledge, in which he outlined his philosophy and his thoughts on the role of science in the modern world.[223][224] He had been selected for the final episode of the lecture series two years prior to the security hearing, though the university remained adamant that he stay on even after the controversy.[225]


In February 1955, the president of the University of Washington, Henry Schmitz, abruptly cancelled an invitation to Oppenheimer to deliver a series of lectures there. Schmitz's decision caused an uproar among the students; 1,200 of them signed a petition protesting the decision, and Schmitz was burned in effigy. While they marched in protest, the state of Washington outlawed the Communist Party, and required all government employees to swear a loyalty oath. Edwin Albrecht Uehling, the chairman of the physics department and a colleague of Oppenheimer's from Berkeley, appealed to the university senate, and Schmitz's decision was overturned by a vote of 56-40. Oppenheimer stopped briefly in Seattle to change planes on a trip to Oregon, and was joined for coffee during his layover by several University of Washington faculty, but Oppenheimer never lectured there.[226][227] 

Oppenheimer was increasingly concerned about the potential danger that scientific inventions could pose to humanity. He joined with Albert Einstein, Bertrand Russell, Joseph Rotblat and other eminent scientists and academics to establish what would eventually, in 1960, become the World Academy of Art and Science. Significantly, after his public humiliation, he did not sign the major open protests against nuclear weapons of the 1950s, including the Russell–Einstein Manifesto of 1955, nor, though invited, did he attend the first Pugwash Conferences on Science and World Affairs in 1957.[228]


In his speeches and public writings, Oppenheimer continually stressed the difficulty of managing the power of knowledge in a world in which the freedom of science to exchange ideas was more and more hobbled by political concerns. Oppenheimer delivered the Reith Lectures on the BBC in 1953, which were subsequently published as Science and the Common Understanding.[229] In 1955 Oppenheimer published The Open Mind, a collection of eight lectures that he had given since 1946 on the subject of nuclear weapons and popular culture. Oppenheimer rejected the idea of nuclear gunboat diplomacy. "The purposes of this country in the field of foreign policy", he wrote, "cannot in any real or enduring way be achieved by coercion". In 1957 the philosophy and psychology departments at Harvard invited Oppenheimer to deliver the William James Lectures. An influential group of Harvard alumni led by Edwin Ginn that included Archibald Roosevelt protested against the decision.[230] Some 1,200 people packed into Sanders Theatre to hear Oppenheimer's six lectures, entitled "The Hope of Order".[228] Oppenheimer delivered the Whidden Lectures at McMaster University in 1962, and these were published in 1964 as The Flying Trapeze: Three Crises for Physicists.[231]

Deprived of political power, Oppenheimer continued to lecture, write and work on physics. He toured Europe and Japan, giving talks about the history of science, the role of science in society, and the nature of the universe.[232] In September 1957, France made him an Officer of the Legion of Honor,[233] and on May 3, 1962, he was elected a Foreign Member of the Royal Society in Britain.[234][235] At the urging of many of Oppenheimer's political friends who had ascended to power, President John F. Kennedy awarded Oppenheimer the Enrico Fermi Award in 1963 as a gesture of political rehabilitation. 
Edward Teller, the winner of the previous year's award, had also recommended Oppenheimer receive it, in the hope that it would heal the rift between them.[236] A little over a week after Kennedy's assassination, his successor, President Lyndon Johnson, presented Oppenheimer with the award, "for contributions to theoretical physics as a teacher and originator of ideas, and for leadership of the Los Alamos Laboratory and the atomic energy program during critical years".[237] Oppenheimer told Johnson: "I think it is just possible, Mr. President, that it has taken some charity and some courage for you to make this award today."[238]


The rehabilitation implied by the award was partly symbolic, as Oppenheimer still lacked a security clearance and could have no effect on official policy, but the award came with a $50,000 tax-free stipend, and it outraged many prominent Republicans in Congress. The late President Kennedy's widow Jacqueline, still living in the White House, made it a point to meet with Oppenheimer to tell him how much her husband had wanted him to have the medal.[239] While still a senator in 1959, Kennedy had been instrumental in the vote that narrowly denied Oppenheimer's enemy Lewis Strauss a coveted government position as Secretary of Commerce, effectively ending Strauss's political career. This was partly due to lobbying by the scientific community on behalf of Oppenheimer.[240] 

Oppenheimer was a chain smoker who was diagnosed with throat cancer in late 1965. After inconclusive surgery, he underwent unsuccessful radiation treatment and chemotherapy late in 1966.[241] He fell into a coma on February 15, 1967, and died at his home in Princeton, New Jersey, on February 18, aged 62. A memorial service was held a week later at Alexander Hall on the campus of Princeton University. The service was attended by 600 of his scientific, political and military associates, including Bethe, Groves, Kennan, Lilienthal, Rabi, Smyth and Wigner. His brother Frank and the rest of his family were also there, as were the historian Arthur M. Schlesinger, Jr., the novelist John O'Hara, and George Balanchine, the director of the New York City Ballet. Bethe, Kennan and Smyth gave brief eulogies.[242] Oppenheimer's body was cremated and his ashes were placed into an urn. His wife Kitty took the ashes to St. John and dropped the urn into the sea, within sight of the beach house.[243]


In October 1972, Kitty died aged 62 from an intestinal infection that was complicated by a pulmonary embolism. Oppenheimer's ranch in New Mexico was then inherited by their son Peter, and the beach property was inherited by their daughter Katherine "Toni" Oppenheimer Silber. Toni was refused security clearance for her chosen vocation as a United Nations translator after the FBI brought up the old charges against her father. In January 1977 (three months after the end of her second marriage), she committed suicide aged 32; her ex-husband found her hanging from a beam in her family beach house.[244] She left the property to "the people of St. John for a public park and recreation area".[245] The original house was built too close to the coast and succumbed to a hurricane. Today the Virgin Islands Government maintains a Community Center in the area.[246]

Alas for Darwinism: the fossil record's gonna fossil record. II

 Fossil Friday: The Complex Wing Folding of Earwigs 

Günter Bechly 

Today’s featured fossil, an earwig, is the paratype specimen of Cratoborellia gorbi, which I found and photographed in a German trader’s collection in July 2006, where I also discovered the holotype that is now deposited in the collection of the Stuttgart Natural History Museum and was described by my fellow student Fabian Haas (Haas 2007). The fossil belongs to the living earwig family Anisolabididae and is three-dimensionally preserved as iron oxide-hydroxide (goethite) in the Lower Cretaceous (115 million years old) laminated limestone of the Crato Formation of northeast Brazil. It is one of the very few fossil earwig specimens with spread hind wings, and it documents a pattern of wing folding very similar to that of its living relatives.


Lay people may hardly be aware that many earwigs do have wings and can fly, since they only rarely do so. They not only possess wings but also show very sophisticated adaptations in their construction. Just like beetles, they have hardened forewings that serve as protective flaps (elytra), while the hind wings fold in a complex way beneath the forewings (earwigs even use their pincers to assist in folding the wings). 

Another Example of Convergence 

This is another example of striking convergence in the animal kingdom. These convergent adaptations can be traced back to the earliest known putative stem earwigs (Protelytroptera) from the Permian period, about 299–252 million years ago (Haas & Kukalová-Peck 2001, Bethoux et al. 2016). Earwig wings not only fold like a fan in the longitudinal direction, but additionally fold along a row of flexible patches in the transverse direction (Haas et al. 2000). This kind of natural origami is stunning and is beautifully illustrated in a YouTube video from ETH Zurich (below), in which researchers copied this design principle for biomimetic technology that could be used for foldable solar sails in space. 

This highly complex mode of wing folding is one of the many examples of engineering marvels in insects that strongly suggest intelligent design as a superior explanation to blind evolution. 

References 

Bethoux O, Llamosi A & Toussaint S 2016. Reinvestigation of Protelytron permianum (Insecta; Early Permian; USA) as an example for applying reflectance transformation imaging to insect imprint fossils. Fossil Record 20, 1–7. DOI: https://doi.org/10.5194/fr-20-1-2016.

Haas F 2007. Dermaptera: earwigs. Chapter 11.6, pp. 222–234 in: Martill DM, Bechly G & Loveridge RF (eds). The Crato Fossil Beds of Brazil. Cambridge University Press, Cambridge (UK), xvi+625 pp.

Haas F, Gorb SN & Wootton RJ 2000. Elastic joints in dermapteran hind wings: materials and wing folding. Arthropod Structure and Development 29(2), 137–146. DOI: https://doi.org/10.1016/S1467-8039(00)00025-6.

Haas F & Kukalová-Peck J 2001. Dermaptera hindwing structure and folding, new evidence for superordinal relationship within Neoptera (Insecta). European Journal of Entomology 98(4), 445–509. DOI: https://doi.org/10.14411/eje.2001.065.



Whither the bright line between artificial and natural causation?

More Unnatural Naturalism, and More Confusion from Naturalists 

David Coppedge 

Yesterday I commented on the conundrums created for evolutionists by engineering. Once you start looking, you’ll frequently see the problem facing naturalists about natural and unnatural causes. Writing in City Journal, for example, science reporter Nicholas Wade assumed that “natural” causes could be distinguished from “manipulated” actions in the case of the origin of SARS-CoV-2: 

Two hypotheses have long been on the table. One is that the virus jumped naturally from some animal host, as many epidemics have done in the past. The other is that it escaped from a lab in Wuhan, where researchers are known to have been genetically manipulating bat viruses in order to predict future epidemics. Both hypotheses are plausible but, so far, no direct evidence exists for either. 

News from CORDIS via Phys.org again illustrates the distinction between natural activities of humans and their intentional, purposeful designs. The article, “When did humans start using roads?”, says this: 

But when did humans actually begin to use roads? “The generic and honest answer is that it’s really hard to know,” says Kalayci. “First, we have to be very clear in our mind what we mean by ‘road’ — are we talking about an engineered road, or a simple dirt track that has naturally formed by people and/or animals constantly walking along the same line?”


In the case of the latter, one can argue, rather philosophically, that as soon as humans learnt to walk and began to traverse the world from their African homelands, roads began to form — in short, a road can be conceived as merely a line that humans continuously wander along.


But Kalayci informs us that it was probably the ancient Egyptians that purposely went out of their way to build the first paved roads, when they were busy building pyramids and other monuments, sometime between 2600 and 2200 BCE, during the Old Kingdom Period. “They essentially wanted a nice, easy, straight route between the monument site and quarry that allowed materials to be transported quickly and efficiently,” he explains. 

Hikers know that animals like bighorn sheep consistently re-use paths in their natural habitats. This quote, though, shows something different about humans. They “purposely” sometimes go “out of their way” to build monuments that are not essential to mere survival, and think about ways to move materials “quickly and efficiently.” They employ mathematics to build geometric objects for purposes that they believe transcend physical existence. 

“Natural” Organisms Are Oblivious to Human Design 

"If the art of ship-building were in the wood,” Aristotle recognized, “ships would exist by nature.” We humans know the intelligent causation, foresight, and intentionality required to build a floating craft able to carry cargo that left to its natural state would sink to the bottom of the sea. Flotsam can drift by nature, but something other than nature is required to design something capable of navigating a chosen course against natural wind and waves, using manufactured sails and oars. 


Ships can, however, sink “by nature” (e.g., due to storms, accidents, entropy). A news item at Frontiers in Science states that there are now “millions of shipwrecks in the world’s oceans, each providing a potentially new habitat for sea life.” The bacteria and fish that find habitats in shipwrecks don’t care. They treat them like other “natural” habitats. Only humans know or care. 

Wooden shipwrecks provide microbial habitats similar to naturally occurring geological seabed structures, reports a new study in Frontiers in Marine Science…. Microbes are at the base of ocean food chains, and this is among the first research to show the impact of human activities–like shipwrecks–on these environments.


“Microbial communities are important to be aware of and understand because they provide early and clear evidence of how human activities change life in the ocean,” said corresponding author Dr Leila Hamdan of the University of Southern Mississippi.


“Ocean scientists have known that natural hard habitats, some of which have been present for hundreds to thousands of years, shape the biodiversity of life on the seafloor. This work is the first to show that built habitats (places or things made or modified by humans) impact the films of microbes (biofilms) coating these surfaces as well. These biofilms are ultimately what enable hard habitats to transform into islands of biodiversity.” 

Is Animal Engineering the Same as Human Engineering? 

To round out this discussion of natural versus unnatural causes, we need to investigate how reporters treat cases of animal engineering. For example, the journal Nature discussed “how bees achieve an engineering marvel: the honeycomb.” In a similar vein, news from Texas A&M tells about research “Determining how and why cells make decisions.” Isn’t decision-making a mental, purposeful activity? Isn’t engineering a honeycomb an example of intentional work for a purpose?


Well, yes and no. The answers can be elucidated with another question: is there a distinction between a software programmer and the program he or she designed? Honeybees and cells have a limited set of options that are programmed into their genomes. It could be considered “unnatural” for a honeybee to gather ingredients and build hexagons in which the queen’s eggs can be nourished. Rock and soil would never do that. The bee must apply directed work against entropy to pull it off. The cells in an embryo “make decisions” based on pre-programmed responses to signals. These can be considered “natural” activities in the same way a robot on a car assembly line is performing the “natural” function it was designed to do. 


Human beings, by contrast, have free will to think, decide, and design things that may have no survival function at all, such as art and literature. As C. S. Lewis said: 

The Naturalists have been engaged in thinking about Nature. They have not attended to the fact that they were thinking. The moment one attends to this it is obvious that one’s own thinking cannot be merely a natural event, and that therefore something other than Nature exists.  

We can decide to do something, or decide not to do it. We can choose between limitless options. Thoughts are what make human beings unnatural. Thoughts are what make us exceptional.