With New MCAS-PARCC Hybrid, Half of Massachusetts Students Fall Short

Common Core is so rigorous! It’s going to help prepare students for college and career! We’ll see higher rates of student achievement!

Well, perhaps not so much. Massachusetts used to be the crème de la crème of K-12 education. Now, according to the new MCAS-PARCC hybrid assessment, half of its students fall short.

The Boston Herald reported this week:

The results may come as a surprise to many students who passed or scored “proficient” on the previous test, but this year’s test was tougher and raised the bar for expectations.

This year, state education officials said, should be considered a baseline year, and they expect scores to change over time as schools adjust.

In other words, the results do not mean individual students declined, but that the new test measures education in a different, more accurate way, officials said.

“It doesn’t mean that your child has changed from last year to this year,” said Jacqueline Reis, DESE spokeswoman. “It’s that we’re sending a clearer signal.”

“This isn’t because we have failing schools,” said Acting Commissioner Jeff Wulfson. “This is because we want to make sure our kids are ready for college work. When we have 80 percent of kids scoring proficient and then needing college-remedial work, we’re not doing them any favors.”

The story, of course, reports that the assessment is “tougher,” but does not explain how. How was the bar raised? How are the results more accurate? I think the state’s Common Core cheerleaders should answer why only half of the state’s students are proficient after several years with the new standards.

Computer Science Makes The Case For Less Computer Use in Schools

Photo credit: Bartmoni (CC-By-SA 3.0)

Keywords for today: Working memory, long-term memory, subroutines, chunking, structured programming, math

When you mention Common Core, most people who are opposed to it immediately mention the “crazy math” instruction created by the standards. They roll their eyes and say, “I never had to learn math that way and I turned out just fine.” Then they point to the statistics showing American students aren’t able to do basic math when they graduate high school. Nationally, 25% of our high school graduates must take remedial coursework in math in college. Here in Missouri, that number is upwards of 30%. From there, the typical response is to blame the teachers and call for the elimination of tenure for these terrible teachers who can’t seem to teach math. Those who like the standards often respond with the pablum that kids won’t need to do math in the real world anyway; they can just use their smartphone or a computer. However, cognitive science is on the side of the old schoolers, and so, interestingly, is computer science.

The fundamental difference in approaches to teaching mathematics (in elementary and middle school) comes down to this: do you focus on teaching processes and creative problem solving, or do you focus on mastery of basic skills and rote memorization? These camps squared off against each other in California in the ’90s and ’00s in what is now known as the Math Wars. Real-world experience was not on the side of the first camp. California moved toward the 1989 recommendations of the National Council of Teachers of Mathematics, which de-emphasized memorization of math facts (among other things covered later in this post). The result was that within four years (by 1996), California’s fourth graders ranked 42nd out of 44 states in math on the NAEP. Five years after adoption, the percentage of students in the state university system requiring math remediation rose from 23% to 54%. Is there science behind these outcomes? It turns out there is.

A meta-study by Hartman and Nelson (Automaticity in Computation and Student Success in Introductory Physical Science Courses) looks at cognitive research, the misguided guidelines produced by the National Council of Teachers of Mathematics, and the subsequent changes to state math standards, and shows how those standards work against the science, producing children so ill-prepared in math that they simply cannot complete a STEM degree.

Cognitive Science

First, a brief lesson in cognitive science, or how the brain works. From their paper:

“The cognitive science model for reasoning is based on the interaction between a long-term memory (LTM), where elements of knowledge are organized, and working memory (WM) where elements are processed.”

Long Term Memory:  “Procedures (sequenced steps for processing) and facts are stored as small elements of knowledge in LTM.” While LTM has the ability to hold thousands of facts, procedures and associations, those elements are stored slowly over time through focused attention, repeated exposure and retrieval practice.

Working Memory: “The brain thinks, plans, and solves problems in working memory.” WM can recall unlimited well-memorized information from LTM but can hold only a few small elements (3-5) of novel information (that which is not well memorized) for a very brief period of time (~30 seconds).

Automaticity: “The fast, implicit, and automatic retrieval of a fact or a procedure from long-term memory.”

During problem-solving, if the limits in WM space for novel information are reached, the result is a sense of confusion and a likely inability to solve the problem. Therefore,

your ability to solve problems depends nearly entirely on how much knowledge you have “memorized to automaticity” in LTM. (Ericsson 1995)

The example Hartman/Nelson gives is the calculation 8 x 7. If the answer cannot be recalled automatically from LTM, and instead the child uses a calculator to come up with the answer, the number 56 must be stored in WM so that it can be transferred to where the calculation is being written. That takes one of the limited 3-5 slots our brains have available for novel information in WM.

Three ways around the novel WM constraints are chunking, algorithms, and automaticity. All require thorough memorization. More on this later.

How the “experts” ignored these scientific facts.

The National Council of Teachers of Mathematics (NCTM) is an organization composed primarily of faculty from schools of education along with K-12 curriculum specialists and instructors. They are not mathematicians. In a detailed 1989 position paper, known as the “NCTM Standards,” the council recommended the use of calculators in all grades. In the ensuing years, over 45 states adopted K-12 standards modeled on the NCTM standards. By 2005, 30 states actually required students as young as first graders to be taught how to use a calculator. The NCTM standards also called for “decreased attention” to be given to:

  • “memorizing rules and algorithms
  • finding exact forms of answers
  • manipulating symbols
  • relying on outside authority (teacher or answer key)
  • rote practice
  • paper and pencil fraction computation”

NCTM’s position paper got states to move students away from placing math facts into LTM in favor of having them work out real-world problems on their own or in small groups, relying very heavily on WM. It should begin to make sense why our students struggle with more complex math in high school and require remedial coursework in college, given the laborious, anti-cognitive-science way they have been taught math.

In contrast, the National Mathematics Advisory Panel (NMAP), made up of mathematicians, recommended in its 2008 report:

“[During calculations,] to obtain the maximal benefits of automaticity in support of complex problem solving, arithmetic facts and fundamental algorithms should be thoroughly mastered, and indeed, over-learned, rather than merely learned to a moderate degree of proficiency.”

Hartman/Nelson’s analysis of American students’ scores on math assessments shows that we are making progress with conceptualized math but tanking on actual computational skills. The reliance on calculators has deprived students of the ability to recognize when the answers provided by the calculators are unreasonable and input error should be considered. A nurse can conceptually understand how important it is to get the right dose of medication for a patient. However, if she cannot do the simple dose-to-weight calculation herself and instead relies solely on a machine programmed to dose per kilogram when the patient’s weight has been entered in pounds, she won’t recognize that the machine’s recommended dose is wrong.
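
To see how little arithmetic is needed to catch such an error, here is a quick back-of-the-envelope sketch in Python. The drug, the dose, and the weights are entirely hypothetical illustrations, not clinical figures:

```python
# Hypothetical numbers: a drug dosed at 10 mg per kilogram of body weight.
DOSE_MG_PER_KG = 10
LB_PER_KG = 2.2

def machine_dose(weight_entered):
    """The machine is programmed to expect a weight in kilograms."""
    return DOSE_MG_PER_KG * weight_entered

patient_weight_lb = 154                            # weight entered in pounds by mistake
patient_weight_kg = patient_weight_lb / LB_PER_KG  # 70 kg, the correct conversion

print(machine_dose(patient_weight_lb))  # 1540 mg  -- 2.2 times too high
print(machine_dose(patient_weight_kg))  # 700.0 mg -- the reasonable dose
```

A nurse with basic math facts in LTM can estimate 154 ÷ 2.2 ≈ 70 kg in her head and see at once that 1,540 mg is more than double the reasonable dose.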

Calculators to Computers

In 1989 students had reasonable access to calculators. Today they have reasonable access to computers, and once again we have “experts” calling for more use of computers in the classroom to be in line with “21st-century skills.” The existence of computers, which can often calculate complex math problems faster than humans, raises the question: do humans even need to be able to solve those problems themselves?

Cognitive experts respond that, even in the age of computers, “automaticity in support of complex problem solving is crucial for students because complex problems have simple problems embedded in them.” (Willingham 2009b) The example above of the nurse provides anecdotal evidence that we still need humans to understand the math.

This interplay between stored knowledge and working memory holds not only for humans but also for the computers that solve these problems so quickly. They are able to do so because they pull answers from their own stored data sets. They use common subroutines linked together, in a process called structured programming, to perform complex calculations and complex processes.

Structured programming takes advantage of the solutions offered by cognitive science: chunking, algorithms, and automaticity. “Chunks” are elements that have been memorized as a group (think keywords or tags). Algorithms are self-contained, step-by-step sets of operations to be performed. Automaticity is rapid retrieval from a known data set.
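
To make those three terms concrete, here is a small illustrative sketch in Python (my own analogy, not an example from Hartman/Nelson): a stored multiplication table plays the role of long-term memory, a dictionary lookup plays the role of automatic retrieval, and re-deriving the fact by repeated addition stands in for burning working memory on it.

```python
# "Long-term memory": multiplication facts stored once, retrieved instantly.
TIMES_TABLE = {(a, b): a * b for a in range(1, 13) for b in range(1, 13)}

def recall(a, b):
    """Automatic retrieval -- no working memory spent deriving the answer."""
    return TIMES_TABLE[(a, b)]

def work_it_out(a, b):
    """Re-deriving 8 x 7 by repeated addition -- every intermediate sum
    occupies one of the few 'slots' of working memory."""
    total = 0
    for _ in range(b):
        total += a
    return total

print(recall(8, 7), work_it_out(8, 7))  # both print 56, at very different cost
```

Both calls print 56, but the lookup is a single retrieval while the loop juggles seven intermediate sums, which is roughly the difference between a memorized fact and a worked-out one.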

 There is excited talk about teaching very young children computer programming. It would be impractical to teach them any specific programming language because the language will likely be obsolete by the time they graduate. But you can teach them the structured programming concepts of taking common subroutines and stringing them together to complete more complex tasks.

For example, say you want to move a virtual car through a simple maze.

Older coding would create a string of instructions that says, for example:

Advance 1 space forward
Turn left
Advance 1 space
Turn right
Advance 1 space
Turn left
Advance 1 space
Turn right
Advance 1 space
Turn left
etc.

In this case, you would have to know exactly what to do in each square in advance and write out all the directions to get the car out of the maze. Make a mistake, like telling it to turn left rather than right or to advance two spaces instead of one, and your car (program) gets stuck. This is a relatively small, simple maze; larger mazes would require much longer code. To techies this is known as spaghetti code: it is long and prone to errors that are hard to find.

Structured programming relies on common tasks that are repeated, and would look conceptually more like this:

1. Directive 1: Purple square stops advance
2. Sub1: Advance one space
3. Sub2: Turn right 90 degrees
4. Sub3: Turn left 90 degrees, turn left 90 degrees
5. Test Directive 1
6. If false, Sub1
7. Repeat until Directive 1 = true
8. If true, Sub2, test Directive 1
9. If false, Sub1
10. If true, Sub3, Sub1
11. Resume line 5

Now, no matter how large your maze is, the program is only this long. The program combines known tasks and conditions to make minute but rapid, repeated decisions that complete a more complex process. Programmers know the value of small, discrete pieces of knowledge that are used frequently to speed up processing. Not only does this make programming faster, it also makes it easier to find where the code has gone wrong. This is very similar to how the human brain works, and it makes the case that children should be embedding those blocks of knowledge in their long-term memory for future use in more complex operations.
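
For readers who want to see the conceptual program above as something runnable, here is a minimal Python sketch. The maze layout, the subroutine names, and the stopping rule are hypothetical illustrations rather than code from any curriculum or product:

```python
# A minimal, runnable sketch of the structured approach above.
MAZE = [                    # '#' = wall ("purple square"), '.' = open, 'E' = exit
    "#####",
    "#...#",
    "#.#.#",
    "#.#E#",
    "#####",
]

DIRECTIONS = [(-1, 0), (0, 1), (1, 0), (0, -1)]   # up, right, down, left

def blocked(pos, heading):
    """Directive 1: the square ahead is a wall."""
    r = pos[0] + DIRECTIONS[heading][0]
    c = pos[1] + DIRECTIONS[heading][1]
    return MAZE[r][c] == "#"

def advance(pos, heading):                         # Sub1: advance one space
    return (pos[0] + DIRECTIONS[heading][0], pos[1] + DIRECTIONS[heading][1])

def turn_right(heading):                           # Sub2: turn right 90 degrees
    return (heading + 1) % 4

def turn_left(heading):                            # half of Sub3 (Sub3 = two left turns)
    return (heading - 1) % 4

pos, heading = (1, 1), 2                           # start at the open corner, facing down
while MAZE[pos[0]][pos[1]] != "E":
    if not blocked(pos, heading):                  # test Directive 1; if false, Sub1
        pos = advance(pos, heading)
    else:                                          # if true, Sub2; if still blocked, Sub3
        heading = turn_right(heading)
        if blocked(pos, heading):
            heading = turn_left(turn_left(heading))

print("Out of the maze at", pos)
```

The four small subroutines never change; a larger maze only means a larger MAZE table, which is exactly the point of the conceptual version above.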

Fortunately, the young mind is primed to do this type of LTM storage. By the time they get to school they have been doing this with language for 4-6 years. It is relatively easy to get them to work on math facts by repeated exposure and retrieval practice. This will provide them with the necessary bank of knowledge to pull from LTM to use in more advanced math later. This will speed up their processing and make finding errors in their processing much quicker as well.

The significance of ignoring what cognitive science tells us and eschewing things like rote memory, as NCTM did, is captured in the abstract of Hartman/Nelson’s paper. A student’s ability to earn a degree in the much-lauded STEM fields hinges on his or her ability to do computational math.

Between 1984 and 2011, the percentage of US bachelor’s degrees awarded in physics declined by 25%, in chemistry declined by 33%, and overall in physical sciences and engineering fell 40%. Data suggest that these declines are correlated to a K-12 de-emphasis in most states of practicing computation skills in mathematics. Analysis of K-12 “state standards” put into place between 1990 and 2010 find that most states directed teachers to de-emphasize both memorization and student practice in computational problem solving. Available state test score data show a significant decline in student computation skills. In recent international testing, scores for US 16-24 year olds in numeracy finished last among 22 tested nations in the OECD. Recent studies in cognitive science have found that to solve well-structured problems in the sciences, students must first memorize fundamental facts and procedures in mathematics and science until they can be recalled “with automaticity,” then practice applying those skills in a variety of distinctive contexts. Actions are suggested to improve US STEM graduation rates by aligning US math and science curricula with the recommendations of cognitive science.

What we are teaching, or not teaching, today will take almost 20 years to show up in the workforce, so these are not just academic debates among elementary and middle school teachers. There is some urgency to get it right.

To review: Reasoning is the interaction between LTM and WM. Without a bank of memorized facts ready for use in WM, our ability to reason is severely impaired. If basic math facts (+, −, ×, ÷) are not available from LTM, students will struggle with more complex math as WM becomes overloaded. Without more complex math they will not be able to complete a STEM degree. Computer science is a STEM subject. Using computers early in education reduces the banking of information into LTM. Computer scientists rely on smaller, more manageable blocks of known data to do complex processing, much like the interchange of LTM and WM. Thus computer science makes the case for not using computers in the early grades to do computational math.

Cross-posted from Missouri Education Watchdog.

Beware of the Personalized Learning Propaganda

Photo Credit: Lucélia Ribeiro (CC-By-SA 2.0)

The lust for money and power drives a lot of bad public policy. This truism certainly applies to education, where technology corporations have joined Brave New Worlders in seeking to implement technology-driven “personalized learning” (PL). What these forces won’t admit (or at least not in so many words) is that the goal of adopting education by machine is to (1) replace genuine education with training for workforce skills, and (2) eventually reshape individual personalities, attitudes, and mindsets to better fit the government-approved mold.

But even if the goal of PL really were to bolster academic content knowledge by improving instruction, modern cognitive science suggests this can’t happen. A recent article by Benjamin Riley explains why. Writing for an issue of Educational Leadership devoted to “Getting Personalization Right,” Riley begins with this wry observation: “with the exception of the article you’ve just started reading, nearly everything you read in this magazine about personalized learning is probably wrong.”

Riley defines PL as a system in which the student has greater control over the content and the pace at which he learns, with some use of technology to customize learning. He then reviews the research on PL’s effectiveness. Asking what evidence there is that PL works, he answers his own question: “Virtually none.” The U.S. Department of Education has funded PL in 21 school districts to the tune of half a billion dollars, but two research studies have shown no significant effect on student outcomes. Riley found only one study, by the RAND Corporation, showing any positive effect on student learning (and that in elementary but not secondary grades), but he notes that even that study’s lead author cautioned against “buy[ing] into the advocacy around how great [PL] is.”

Although Riley admits he spent years advocating for PL, he now realizes that “we are collectively fooling ourselves on the idea.” Why does PL fail at actually improving learning? “I believe,” he says, “it contradicts another well-established evidence base related to education: the science of learning.”

As Riley explains, cognitive science teaches that, in any area of study, learning depends on committing certain facts to long-term memory. Once that occurs, the student’s brain is freed to use what’s called “working memory” – actively thinking about something – to solve problems and otherwise build on the knowledge that’s now embedded in the brain. In contrast to our limited working memories, Riley says, “long-term memory refers to facts that you have memorized, and no longer need to consciously think about to access.” An example would be multiplication tables – you needn’t stop to calculate 8 x 5, because you’ve memorized the answer. “The expansion of long-term memory gives students more space for active thinking.”

How does PL conflict with this scientific reality? Because students who are controlling the content of their learning, usually by finding information on the Internet or clicking through an educational-software program, are highly unlikely to commit that information to long-term memory. They scan it, they click it, they’re on to the next task. Certainly there are exceptional students who will delve deeply enough to implant the information in their brains, but the vast majority of students simply won’t, unless they’re made to.

Which brings us to the second of Riley’s characteristics of PL: a student’s control over the pace of his learning. The science of learning conflicts with this as well. As Riley says, “effortful thinking – making use of short-term memory – is mentally fatiguing. . . . [T]he majority of students need the equivalent of a trainer in the gym to help them keep on pace to learn. We call these trainers teachers.” But of course, teachers are exactly the link in the education chain that PL advocates hope to eliminate, or at least minimize by reducing them to data-collectors.

In her valuable book Seven Myths About Education, Daisy Christodoulou makes the same point about the necessity of committing knowledge to long-term memory. Everything the progressive educators say they want – including education as “problem-solving” – depends on deeply embedded knowledge: “When we try to solve any problem,” Christodoulou writes, “we draw on all the knowledge that we have committed to long-term memory. The more knowledge we have, the more types of problems we are able to solve.” And the stubborn fact is that PL makes it much harder for students to increase their long-term knowledge.

In short, cognitive science confirms what all veteran teachers know: true learning requires structure, repetition, and work, not just the ability to mimic something that pops up once on a screen before moving on to the next task. Beware the PL propaganda.

California Bracing for Disappointment?

California is set to release its Smarter Balanced scores for this year. It’s the third year the state has administered the Common Core-aligned assessments. The spin machine seems to be in full effect ahead of the release, as evidenced by an article in EdSource cautioning readers to treat California’s test scores “with caution.”

State Board of Education member Sue Burr, a close advisor to Gov. Brown and who was involved in drawing up the new accountability system, noted at the board’s recent meeting that under California’s old system “test results were the be-all and end-all” as to assessing whether the state’s students were making sufficient progress.

She said that in the future it may make more sense for California to release results on performance on all measures simultaneously, not just test scores. That way, Californians would have a better idea of “what the whole picture looks like,” instead of making “a big whoop-de-do about test results.”

It is also the case that in a state with close to 1,000 school districts and 10,000 schools, test scores are a blunt instrument in telling us what is going on. The statewide averages are just that— averages. They mask how well individual schools and students are doing, and similarly, how poorly others are doing.

“There is a risk that people will pay too much attention to the magic numbers, because they are easy to understand and to compare across the system,” said John Affeldt, managing attorney for Public Advocates, a public interest law firm that has been heavily involved in promoting better education outcomes in California’s schools. “It is incumbent on policy makers and educators to communicate that California education is about a lot more than the numbers of students who score at a certain level on a test.”

There is also a danger that parents, advocates and others who are understandably impatient to see rapid improvements will be tempted to declare the current reforms a failure if the tests scores don’t improve over last year’s scores.

It is like they’re expecting people to be disappointed with the test scores. Spin…spin…spin…

AP Confirms What We Already Knew

The Associated Press last week confirmed what we already knew: most of the Common Core “repeals” states have carried out have been a complete sham.

An excerpt:

Of the states that opted in after the standards were introduced in 2010 — 45 plus the District of Columbia — only eight have moved to repeal the standards, largely due to political pressure from those who saw Common Core as infringing on local control, according to Abt, a research and consulting firm. In Oklahoma, Gov. Mary Fallin signed a bill to repeal the standards in 2014 less than six months after defending them in a speech. She said Common Core had become too divisive.

Twenty-one other states have made or are making revisions — mostly minor ones — to the guidelines. Illinois kept the wording while changing the name. In April, North Dakota approved new guidelines “written by North Dakotans, for North Dakotans,” but some educators said they were quite similar to Common Core. Earlier this month, New York moved to revise the standards after parents protested new tests aligned to Common Core, but much of the structure has been kept.

“The core of the Common Core remains in almost every state that adopted them,” said Mike Petrilli, president of the conservative Thomas B. Fordham Institute.

Then they note Common Core has not lived up to its promise.

Measuring the direct impact of Common Core is difficult. A study last year by the Brown Center on Education Policy with the Brookings Institution showed that adopters of Common Core initially outperformed their peers, but those effects faded. It’s also unclear if the gains were caused specifically by Common Core.

“I think it was much ado about nothing,” said Tom Loveless, the author of the report. “It has some good elements, some bad elements. Common Core nets out to be a non-event in terms of raising student achievement.”

Petrilli, who advocated for Common Core, is convinced the standards resulted in more rigor and better tests.

“We are now following a much better recipe for student achievement, but the cake is still being baked, so we don’t yet know if it’s going to taste as good as we hope,” Petrilli said.

In my opinion, Tom Loveless has more credibility than Michael Petrilli on this subject. Common Core has certainly been “a non-event in terms of raising student achievement” and an expensive one at that.

Bill Gates has said that we won’t know for at least ten years whether or not the reforms he has championed will work. That’s a long time to bake a cake and too much is at stake to get it wrong.

The Forgotten Student

In an 1883 essay titled “The Forgotten Man,” Yale Professor William Graham Sumner asked the reader to imagine four men.  Two of them, A and B, observe a third man, X, who is in need.  They decide to use the machinery of government bureaucracy to transfer wealth to X.  But the man who pays for this wealth transfer is neither A nor B, but a fourth man, C, who we today might say is among the middle or lower-middle class.  In Sumner’s original construct, C was the “forgotten man.”

Franklin Delano Roosevelt used this same essay in the 1930s to justify his New Deal program.  However, FDR revised the concept to exclude C from the conversation and make X the “forgotten man.”  This change in the metaphor relieved X of any responsibility for his circumstances.  According to historian Amity Shlaes in her book “The Forgotten Man: A New History of the Great Depression,” this was the beginning of the “modern entitlement challenge,” as Roosevelt figuratively rewrote the definition of the word “liberal,” changing its application from individual liberty and individual rights to group identity and rights.

How Education Policy Creates “The Forgotten Student”

A century after the original “Forgotten Man” essay was written, Charles Murray’s book, Losing Ground: American Social Policy 1950-1980, explained how modern social policy had expanded the concept beyond income transfers.  He writes the following in a section titled Robbing Peter to Pay Paul:  Transfers from Poor to Poor:  “But in a surprising number of instances the transfers are mandated by the better-off, while the price must be paid by donors who are just as poor as the recipient.”

Murray provides a thought experiment wherein two poor inner-city students are alternatively benefited and harmed by the federal government’s education policies.  He posits a teacher in an inner-city school with students facing identical ethno-socioeconomic circumstances, where one behaves in a “mischievous” way, and another does not.  Out of a desire to protect the “mischievous” student’s civil rights, the education system prevents the teacher from disciplining him. As a result, Murray writes:

I find that the quality of education obtained by the good student deteriorated badly, both because the teacher had less time and energy for teaching, and because the classroom environment was no longer suitable for studying.  One poor and disadvantaged student has been compelled (he had no choice in the matter) to give up part of his education so that the other student could stay in the classroom.

How DOE Regulations Harm the Forgotten Student

Recently, the sort of action Murray warned about has been brought to light by Wall Street Journal columnist Jason L. Riley.  In his September 12th article “Another Obama Policy Betsy DeVos Should Throw Out,” Riley describes how the Education Department released a 2012 study showing that black students were three times as likely as their white counterparts to be suspended and expelled.  In 2014, the DOE issued a “guidance” letter warning school districts to address this racial imbalance.  The letter said that a district could face a federal civil-rights investigation “if a policy is neutral on its face – meaning that the policy itself does not mention race – and is administered in an evenhanded manner but has a disparate impact, i.e., a disproportionate and unjustified effect on students of a particular race.”  Riley states:

Fending off charges of discrimination can be expensive and embarrassing, so spooked school districts chose instead to discipline fewer students in deference to Washington. The Obama guidance didn’t start the trend—suspensions were down nearly 20% between 2011 and 2014—but the letter almost certainly hastened it. The effects are being felt in schools across the country, leaving black and Hispanic students, the policy’s theoretical beneficiaries, worse off.

Reversing The Unraveling

Since the creation of the US Department of Education, the debate over education policy has been fought between those who want some sort of national curriculum and federal control on one side and, on the other, those who advocate for parental rights and local control over the teaching of subject matter and moral values.  In the meantime, America has ignored the Forgotten Student and succumbed to what Allan Bloom called “The Closing of the American Mind” toward such ideals as right, wrong, good, and evil.  This process has led inexorably to what I will call The Unraveling of We the People.

The Progressive Movement has advocated this “great closing” as a way to deliberately move away from the inculcation of Christian values in the minds of young students, and directly mold the character of our people.  Reversing this trend will not be easy.  It will take a coalition of tea party activists, conservative Christian academics, and researchers skilled in untangling the web of “educrat” regulations filled with doublespeak to reverse course.

I can think of no better place to start this conversation than with the readers of this blog.

South Dakota’s Public Colleges Promise Admission for Good Smarter Balanced Scores

South Dakota’s public university system has promised that students earning a level 3 or 4 on their Smarter Balanced Assessment will be guaranteed a general acceptance to one of the state’s six public universities or four technical schools.

The Sioux Falls Argus Leader reports:

High school seniors in South Dakota may receive an acceptance letter for college before they ever apply.

In an effort to boost enrollment, South Dakota public universities will be sending out what they call “proactive admissions” letters to qualifying students later this month.

These letters will go to students who score well in English and math on either the state standardized tests or on the ACT. Students who receive letters will be guaranteed acceptance into the state’s six public universities and four tech schools.

“This now is the first criteria that our institutions will use to determine a student’s admission,” said Paul Turman, vice president for academic affairs for the South Dakota Board of Regents.

Students who earn an 18 on their ACT will also receive a general acceptance letter.

This news provides more motivation for teachers to prep the state’s high school juniors for the Smarter Balanced Assessment. Oh goody.

Oregon’s Smarter Balanced Assessment Scores Drop

Oregon schools showed a drop in proficiency in math and ELA as last school year’s Smarter Balanced Assessment scores were released.

The World reported:

Local school districts saw a slight decrease in overall scores compared to the year before. In the new scores, most didn’t exceed or come close to 50 percent proficiency, according to  results from Oregon’s Smarter Balanced assessments for the 2016-2017 school year.

“We have an important opportunity, through the Oregon Plan under the federal Every Student Succeeds Act, to focus on providing a culturally relevant, well-rounded education for every student,” State Deputy Superintendent of Public Instruction Salam Noor said in a press release from the Oregon Department of Education.

The Smarter Balanced Assessment Consortium (SBAC) is standardized state testing that helps create common core state standards. It tests third through eighth grade levels and high school on math, science, and the English Language Arts.

“We are confident that as we work with school and district leaders to implement the Oregon Plan, we will see more students attending school regularly, more students graduating and more scoring in the proficient category on these assessments,” Noor said.

Across Oregon, SBAC scores showed fewer students being proficient in English Language Arts (ELA) and math.

Surprise… Surprise… Surprise… Oregon adopted the Common Core State Standards in 2010 and fully implemented them by the 2014-2015 school year. A drop in proficiency would be expected the year the state switched to Smarter Balanced from the previous OAKS assessment, but this decline shows up across consecutive years of taking Smarter Balanced.

The standard response would be to give the standards and the assessment more time to make a difference in student achievement. The problem is that we are not seeing that increase in student achievement anywhere else as a result of Common Core, at least not when you look at, say, Kentucky’s ACT scores. Kentucky was the first state to implement Common Core. Oregon’s ACT scores also look relatively stagnant over the last five years.

However, give it time; the magic silver bullet of education reform is soon to make its impact, I am sure.

Next Generation Science Standards Are Bad Even Without Politicization

High School Student Holding Molecular Model

I received an email today about yesterday’s article on the Next Generation Science Standards (The Next Generation Science Standards Are Already Politicized) that I wanted to address.

The reader wrote:

I identify as an independent voter and was happy to see, as I battle Common Core, that my friends were of a diverse political background and that almost everybody who learned about it got on board with our battle. It was not a Republican or Democratic issue, truly bipartisan. After reading the TAE article The Next Generation Science Standards Are Already Politicized, I worry that it is becoming so, at least with the science standards. I think the Bible should stay at the church and science should be presented with the latest info that is accepted by most scientists. Science changes, I mean is Pluto a planet or not, they keep changing their mind! Trump appointed a man who is the ‘chief Scientist’ or whatever his title is, and he’s not even a scientist, that is alarming to me.

There are so many issues we can agree on about how much Common Core, well, sucks, I’m hoping we can stick to those issues so we don’t lose our Democratic warriors. I appreciate and love that you said “Give me a break. I hope Governor Martinez would reject even the “sanitized” version as Next Generation Science Standards are awful.”  But I fear the whole article is highlighting the politicizing, which is what ‘they’ want to do to help keep us divided. I hope everyone reads to the end and really gets your message.

I understand this point of view. I want to make abundantly clear that my opinion is mine alone, and not necessarily the view of everyone connected to TAE. I primarily addressed that topic because that is what the article I referenced focused on. I am not suggesting that schools should teach creation, so no, I am not pushing for the Bible to be taught. (I did not even advocate a position yesterday other than to say NGSS was politicized.) At the very least, I believe, educators should keep the discussion about evolution at the high school level. I also think honest academic discussion should include differing opinions on the subject. Many accept micro-evolution but see inherent problems with macro-evolution. Why not discuss those? Evolution is not the only scientific theory on the block; why not mention the others? At least kids should come out of a biology class with an understanding that the origin-of-life question is far from settled and evolution is one explanation. Unfortunately, that is not the case, and instead of education, many students get indoctrination. Is that what we want?

That is my personal opinion, and I understand that not every reader, including ones who believe the Next Generation Science Standards and Common Core are subpar, agrees with me.

Like I said yesterday, I do hope Governor Susana Martinez rejects even a “sanitized” version of the Next Generation Science Standards, as the standards have plenty of problems of their own.

The Fordham Institute, which endorsed the Common Core math and ELA standards, reviewed the second draft of the Next Generation Science Standards, made public briefly in January of this year. Fordham said “large problems still abound,” and those include:

  • In an apparent effort to draft fewer and clearer standards to guide K–12 science curriculum and instruction, the drafters continue to omit quite a lot of essential content. The pages that follow supply many examples. Among the most egregious omissions are most of chemistry; thermodynamics; electrical circuits; physiology; minerals and rocks; the layered Earth; the essentials of biological chemistry and biochemical genetics; and at least the descriptive elements of developmental biology.
  • As in version 1.0, some content that is never explicitly stated with regards to earlier grades seems to be taken for granted when referring to later grades—where, we fear, it won’t actually be found if the earlier-grade teachers do not see it made explicit.
  • Real science invariably blends content knowledge with core ideas, “crosscutting concepts,” and various practices, activities, or applications. The NGSS erroneously claims that presenting science as such an amalgam is a major innovation (“conceptual shift”), which it is not. Much more problematic, the NGSS has imposed so rigid a format on its new standards that the recommended “practices” dominate them, and basic science knowledge—which should be the ultimate goal of science education—becomes secondary. Such a forced approach also causes the language of these standards to become distractingly stereotyped and their interpretation a burden.
  • As noted above (and praised), the drafters made a commendable effort to integrate “engineering practices” into the science rather than treating engineering as a separate discipline. Still—once again—their insistence on finding such practices in connection with so many standards sometimes leads to inappropriate or banal exercises—and blurs the real meaning of “engineering.”
  • The effort to insist on “assessment boundaries” in connection with every standard often leads to a “dumbing down” of what might actually be learned about a topic, seemingly in the interest of “one-size-fits-all” science that won’t be too challenging for students. This is a mistake in at least two ways. First, it potentially limits how far and how deep advanced students (and their teachers) might go. (The vague assertion that this can be dealt with via “advanced” high school courses helps almost not at all.) Second, it usurps the prerogative of curriculum builders and those constructing (and determining proficiency levels on) assessments to make these decisions for themselves. It is one thing to set forth what must be covered in school; it is quite another to try to put limits on how much more might be covered—and to suggest that not going farther is perfectly okay, even for pupils who could and would. What’s more, these “boundaries” are often used to strip science of critical mathematics content.
  • A number of key terms (e.g., “model” and “design”) are ill-defined or inconsistently used.
  • Even as the amplitude of new appendices adds welcome explanation and clarification of what is and isn’t present and why, it also produces a structure for NGSS that most users, especially classroom teachers, will find complex and unwieldy. Even the attempts to help users understand and apply these standards (as in the four-page PDF document titled “How to read the NGSS standards”) are complicated and confusing. Moreover, the various appendices are clearly aimed at different audiences without ever saying so. Will a fifth-grade teacher actually make her way to Appendix K to obtain additional (and valuable) information about science-math alignment and some pedagogically useful examples? Will the final version of NGSS omit some of the intervening appendices that have more to do with the philosophical, political, and epistemological leanings of the project and its leaders than with anything of immediate value to real schools?
  • Although the “alignment” of NGSS math with Common Core math is improved, there also seems to have been a conscious effort by NGSS drafters not to expect much science to be taught or learned of the sort that depends on math to be done properly. This weakens the science and leads, once again, to a worrisome dumbing down, particularly in high school physics—which, as the reviewers note, “is inherently mathematical.” It must also be noted that Appendix K, valuable as it is in grades K–5, is essentially AWOL from the middle and high school grades, where it is most needed. Indeed, our math reviewers found “no guidance about the specific mathematics to be used for individual science standards at the high school level. And only occasional guidance at the middle school level.”

One science teacher also saw the Next Generation Science Standards as “backward engineered.” He wrote, “The ‘Next Generation Science Standards’ have set out to backward engineer the whole science curriculum into a coherent, self-validating tool. The goal all along was an instrument to market both teaching and assessment products to a captive education system, not to provide a framework for good teaching of the sciences. In addition to all the historical evidence for this interpretation, we can now examine the document itself.”

He added, “In fact, we can readily see that their standards are made out of picked bones. These standards actually don’t span anything much and connect nothing but assessment boundaries. In this case, less isn’t more. We would be forced to devote all the formative, developmental years to consumption of standards-based learning products and assessments, in absurdist preparation for future standards-based product lines.”

The standards (Fordham mentions this as well) do not include chemistry as a separate subject but instead distribute it throughout other subjects. In so doing, the standards drop essential science content, writes former chemistry professor and science editor Harry Keller.

The standards also fail to require any chemistry labs, which is odd given their focus on experiential learning, and they entirely distort the point of science, which is learning from tested experience. Their format pushes a teaching method similar to that of the failed 1940s progressive science that focused not on learning but on the “social, personal, and vocational needs of the student,” Keller writes.

So hopefully the evolution and man-made climate change proponents among us can also recognize that these are bad standards.

Submit Federal Education Privacy Comments by September 20 at 11:59 p.m.!

This is an important action item, so I wanted to be sure people saw it. I highlighted Education Liberty Watch’s action alert on TAE social media, but I wanted to include it on our website for people dropping by who may have missed it.

Here is their whole post below. Be sure to leave a comment today (9/19) or tomorrow (9/20); comments close on 9/20 at 11:59 p.m. (EDT). Also, if you use a MacBook, just know that the form is wonky in Safari. I ended up having to use my iPhone. You may want to try Google Chrome or Mozilla Firefox instead.


We now have an opportunity to protect the privacy and minds of our children. We can submit comments in support of President Trump’s effort to scale back regulations, particularly in the U.S. Department of Education on several of these topics. All comments must be submitted by 11:59 PM on Wednesday, September 20th at this website:  https://www.regulations.gov/comment?D=ED-2017-OS-0074-0001.
 
Here are two big areas of “fed ed” regulations. There are more detailed comments below that you may add if you want to, but all you really need to do is to ask for 1) Withdrawal of all the FERPA regulatory changes made to 34 CFR, Part 99 that went into effect in January of 2012 and 2) Enforcement of the statutory prohibition on assessing “attitudes and beliefs” of a student or their family in ESSA’s state-mandated assessments or in the NAEP.
 
Thank you for what you can do and for your efforts to protect the hearts and minds of children!
 
Detailed Additional Information for Potential Comments
1) FERPA – Withdraw the regulatory changes to FERPA that went into effect in 2012. Those changes allow USED, state agencies, and schools to disclose personally identifiable information (PII) to literally anyone in the world, without parental consent or even notification, so long as the disclosing entity uses the correct language to justify the disclosure; withdrawing them would end that. Ask USED to:

Restore the longstanding, pre-2012 definitions and interpretations of an “authorized representative,” “education program,” and other terms.

Stop a state department of education or other agency that receives PII for other purposes from redisclosing that data to other entities, such as researchers, without parental consent.

Restore the audit exception requirement (previously contained in 34 CFR §99.35(a)(2)) that, in order for a state or local educational authority to conduct an audit, evaluation, or compliance or enforcement activity, it must demonstrate authority to do so under some federal, state, or local grant of authority.

2) Enforce the statutes prohibiting the assessment of “attitudes and beliefs” in surveys associated with ESSA’s mandated state tests and in the NAEP.
Such surveys (which are being administered without parental consent) violate one or both of the following:

ESSA [20 U.S.C. §6311(b)(2)(B)(iii)] requiring statewide assessments to “objectively measure academic achievement, knowledge, and skills, and be tests that do not evaluate or assess personal or family beliefs and attitudes or disclose personally identifiable information.” There is identical language in the Education Sciences Reform Act that covers the NAEP [20 USC §9622(b)(5)(A)]

PPRA [20 U.S.C. §1232h(b)(1-8)], which requires parental review and consent for surveys in federally funded education programs that ask about eight sensitive items, including mental health, illegal or anti-social behavior, and sexual behavior or attitudes.