Mandated “Standardized” Tests or Mandated “Performance” Tests?

Fewer and fewer colleges require SAT scores for admission, and more and more parents and others are calling for the reduction or elimination of “standardized” tests.  Interestingly, there is little call for eliminating mandated K-12 tests altogether.  One might expect that call, given the complaints against Common Core-aligned tests and the number of misleading references to what Finland has done.

According to many education writers in this country, there are no tests in Finnish schools, at least no “mandated standardized tests.”  That phrase was carefully hammered out by Smithsonian Magazine to exclude the many no- or low-stakes “norm-referenced” tests (like the Iowa Test of Basic Skills, or ITBS) that have been given for decades across this country, especially in the elementary grades, to help school administrators understand where their students’ achievement fell under a “normal curve” of test-score distribution.
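
For readers unfamiliar with the jargon, norm-referencing simply locates a student’s raw score within the distribution of scores in a norming sample and reports a percentile rank. Here is a minimal sketch in Python; the mean, standard deviation, and raw score are invented for illustration:

```python
from statistics import NormalDist

# Invented norming data: mean and standard deviation of raw scores
# in a hypothetical norming sample.
norm = NormalDist(mu=50, sigma=10)

raw_score = 62
percentile = norm.cdf(raw_score) * 100  # share of the norm group scoring below

print(f"A raw score of {raw_score} falls at roughly the {percentile:.0f}th percentile.")
# -> A raw score of 62 falls at roughly the 88th percentile.
```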

Yet a prominent Finnish educator tells us that Finnish teachers regularly test their upper-grade students. As Finnish educator Pasi Sahlberg noted (p. 25), teachers assess student achievement in the upper secondary school at the end of each six- to seven-week period, or five or six times per subject per school year. There are lots of tests in Finnish schools, it seems, but mainly teacher-made tests (not state-wide tests) of what teachers have taught.  There are also “matriculation” tests at the end of high school (as the Smithsonian article admits)—for students who want to go to a Finnish university.  They are in fact voluntary; only students who want to go on to university take them.  In short, Finnish students take plenty of tests, just not where American students are heavily tested (in the elementary and middle grades) and not constructed by a testing company.

Why should Americans now be even more interested in the topic of testing than ever before?  Mainly because there seems to be a groundswell developing for “performance” tests in place of “standardized” tests.  And they are called “assessments,” perhaps to make parents and teachers think they are not those dreaded tests mandated by state boards of education for grades 3-8 and beyond as part of the Every Student Succeeds Act (ESSA). Who wouldn’t want a test that “accurately measures one or more specific course standards”?  And that is also “complex, authentic, process and/or product-oriented, and open-ended”?  Edutopia writer Patricia Hilliard doesn’t tell us in her 2015 blog “Performance-Based Assessment: Reviewing the Basics” whether it also brushes our hair and shines our shoes at the same time.

It’s as if our problem were simply the type of test that states have been giving, not what is tested, nor the cost or amount of time teachers and students spend on them.  It doesn’t take much browsing online to discover that two states, Vermont and Kentucky, already found deep problems with performance tests, too.

An old government publication (1993) warned readers about some of the problems with portfolios: “Users need to pay close attention to technical and equity issues to ensure that the assessments are fair to all students.” It turns out that portfolios are not good for high-stakes assessment—for a range of important reasons. In a nutshell, they are costly, time-consuming, and unreliable.  One of the researchers/evaluators in the Vermont initiative concluded: “The Vermont experience demonstrates the need to set realistic expectations for the short-term success of performance-assessment programs and to acknowledge the large costs of these programs.” Koretz et al. state elsewhere in their own blog that the researchers “found the reliability of the scoring by teachers to be very low in both subjects… Disagreement among scorers alone accounts for much of the variance in scores and therefore invalidates any comparisons of scores.”
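
To see concretely why scorer disagreement matters, consider a toy calculation with invented portfolio scores: if two raters independently score the same portfolios, the correlation between their ratings is a rough index of scoring reliability, and the variance of their disagreements can be compared with the variance across students. The sketch below is illustrative only, not a reconstruction of the Vermont analyses:

```python
from statistics import correlation, variance

# Invented data: two raters independently score the same ten portfolios
# on a 1-6 rubric.
rater_a = [4, 2, 5, 3, 6, 1, 4, 3, 5, 2]
rater_b = [2, 3, 4, 4, 3, 2, 5, 1, 3, 4]

# Pearson correlation between raters: a rough index of scoring reliability.
r = correlation(rater_a, rater_b)

# If the variance of rater disagreement rivals the variance across students,
# score differences mostly reflect who did the scoring, not student work.
disagreements = [a - b for a, b in zip(rater_a, rater_b)]
student_means = [(a + b) / 2 for a, b in zip(rater_a, rater_b)]

print(f"inter-rater correlation:    {r:.2f}")
print(f"variance from disagreement: {variance(disagreements):.2f}")
print(f"variance across students:   {variance(student_means):.2f}")
```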

Koretz and his colleagues emphasized the lack of quality data in another government publication. And as noted in a 2018 blog by Daisy Christodoulou, a former English teacher in several London high schools, validity and reliability are the two central qualities needed in a test. 

We learned even more from a book chapter by education professor George K. Cunningham on the “failed accountability system” in Kentucky. One of Cunningham’s most astute observations is the following:

Historically, the purpose of instruction in this country has been increasing student academic achievement. This is not the purpose of progressive education, which prefers to be judged by standards other than student academic performance. The Kentucky reform presents a paradox, a system structured to require increasing levels of academic performance while supporting a set of instructional methods that are hostile to the idea of increased academic performance (pp. 264-65).

That is still the dilemma today—skills-oriented standards assessed by “standardized” tests that require, for the sake of a reliable assessment, some multiple-choice questions.  

Cunningham also warned, in the conclusion to his long chapter on Kentucky, about using performance assessments for large-scale assessment (p. 288).  “The Performance Events were expensive and presented many logistical headaches.”  In addition, he noted:

The biggest problem with using performance assessments in a standards-based accountability system, other than poor reliability, is the impossibility of equating forms longitudinally from year to year or horizontally with other forms of assessment. In Kentucky, because of the amount of time required, each student participated in only one performance assessment task. As a result, items could never be reused from year to year because of the likelihood that students would remember the tasks and their responses. This made equating almost impossible.  

Further details on the problems of equating Performance Events may be found in a January 1998 technical review by James Catterall and four others for the Commonwealth of Kentucky Legislative Research Commission.  Also informative is a 1995 analysis of Kentucky’s tests by Ronald Hambleton et al.; it is a scanned document but can be made searchable with Adobe Acrobat Professional.

A slightly optimistic account of what could be learned from the attempt to use writing and mathematics portfolios for assessment can be found in a recent blog by education analyst Richard Innes at Kentucky’s Bluegrass Institute.

Concluding Remarks:

Changing to highly subjective “performance-based assessments” removes any urgent need for content-based questions. That is why the agreed-upon planning documents for teacher licensure tests in Massachusetts (required by the Massachusetts Education Reform Act of 1993) specified more multiple-choice questions on content than essay questions in their format (all the tests included both). For the tests’ construction, revision, and approval, the documents also required content experts as well as practicing teachers with that license, together with education school faculty who taught methods courses (pedagogy) for that license. With the help of the president of National Evaluation Systems (NES, the state’s licensure test developer) and others in the company, the state was able to get more content experts involved in the test approval process.  What Pearson, a co-owner of these tests, has done since its purchase of NES is unknown.

For example, it is known that for the Foundations of Reading (90) test, a licensure test for most prospective teachers of young children (in programs for elementary, early childhood, and special education teachers), Common Core’s beginning reading standards were added to the test description, as were examples for assessing the state’s added standards in the original NES practice test.  It is not known if changes were made to the licensure test itself (used by about six other states) or to other Common Core-aligned licensure tests or test-preparation materials, e.g., for mathematics.  Even if Common Core’s standards are eliminated (as in Florida in 2019 by a governor’s executive order), their influence remains in some of the pre-Common Core licensure tests developed in the Bay State—tests that contributed to academically stronger teachers for the state.

It is time for the Bay State’s own legislature to do some prolonged investigation of the costs and benefits of “performance-based assessments” before agreeing to permit them in Massachusetts and before accepting the arguments likely to be made by FairTest or others eager to eliminate “standardized” testing.

Indoctrination in the SAT

David Coleman announces the SAT redesign.

Parents have been complaining about a question on the SAT their children took recently. Two parents reported that the test asked about a speech given by Bernie Sanders.

The first parent asked on social media: 

1) Why was there an Essay Question on my daughter’s SAT test asking her to explain why Bernie Sanders speech was effective?? 

Regardless of any political beliefs this is underhanded and just wrong.

2) The whole country takes mandatory SAT’s yesterday and my daughter was one of them….she told me that the last question was critiquing a speech that Bernie Sanders made on not privatizing the post offices. His arguments/opinions put out there without any opposing views. 

It’s a good time to remind you that David Coleman, one of the chief architects of the Common Core standards, is now the president of the College Board. Since he was elevated to this position, there has been much controversy surrounding the SAT and the Advanced Placement program.

Coleman came under fire after the testing organization used the tragedy of the Parkland school shootings to promote the Advanced Placement program.

Coleman also came under scrutiny when, in redesigning the SAT, he quickly moved to align it to the Common Core standards.

The College Board then moved to revise its AP U.S. History (APUSH) framework along ideologically slanted lines. This move resulted in calls to break the College Board’s testing monopoly. Politicizing U.S. history was not going to happen without controversy or a fight.

One of the ways to indoctrinate children with biased political views is through standardized testing.  In New Hampshire, state law requires that the SAT be used to test children in 11th grade.  This was signed into law after the Smarter Balanced Assessment created a whirlwind of controversy several years ago. As one wise parent pointed out when he looked at the question:

Notice how the question is couched. It’s sort of like asking, “Explain why Hillary Clinton isn’t President even though she deserved to win.” It’s an opinion framed as a fact. 

The problem isn’t that they included a speech from a political candidate. The problem is that they presented opinion as fact. It’s called a “mind virus.”

When the College Board hired a political operative as its president, that brought with it the possibility of more politicization and indoctrination through its assessments and AP courses. It appears that is where Coleman has taken the organization.

That might be why more and more colleges no longer consider the SAT in their admissions process.  According to fairtest.org, “More than 1000 four-year colleges and universities do not use the SAT or ACT to admit substantial numbers of bachelor-degree applicants.”

This kind of political indoctrination does not help public education. Parents need to fight for quality education, not indoctrination.  Illiteracy is nothing to cheer about, and the more this kind of testing becomes acceptable, the greater the chance of dumbing down our public schools.

Unsolved Problems with Common Core-Aligned Tests

There are many teeth in the Common Core standards project.  These teeth do not lie in visits by monitors from a department of education (in place of a school’s principal) to each elementary classroom in a school. Soft regulatory teeth lie in the Common Core-aligned textbooks, professional development, instructional materials, software, and other products teachers are encouraged or required to use.

The teeth are most prominent in the tests based on Common Core’s standards to determine whether students have learned what the tests claim to assess or can satisfactorily do what the test items expect of them. According to proponents of accountability, student scores are the major means by which policy makers and school administrators will judge whether teachers have taught to these standards. Common Core’s tests are high-stakes for teachers, less so for students. Only the tests for “college readiness” in grades 10, 11, or 12 will be very high-stakes for students as well as for teachers. Yet, as the Common Core drama unfolds, we don’t know much about the tests aligned to its standards.

Common Core-aligned tests MUST by law be based on a state’s official standards.  That is why the tests given in the Bay State (aka MCAS 2.0) are aligned to Common Core.  Despite their name, they are based on the Common Core standards for English language arts and mathematics adopted by the state board of education in 2010 and slightly revised by the Department of Elementary and Secondary Education (DESE) in 2016 for the four-year state education plan required by the Every Student Succeeds Act (ESSA). (Since the tests are given in the Bay State, they are legitimately Massachusetts tests, except that they are totally unlike the original MCAS tests. For example, they contain no Open Response (OR) test items, which were very useful for assessing content-based writing; the original MCAS tests included four OR questions on every test at every grade level.)   The state’s four-year plan was submitted to the U.S. Department of Education in 2017 for review and approval, in exchange for Title I money. Approval by the state legislature and local school committees was not required or obtained for a four-year plan that no one in the state debated or voted for.

Since all states today use Common Core-aligned tests, that means almost all schools (including public charter schools) teach to Common Core’s standards. It is not possible to understand the growing opposition to Common Core’s standards without understanding several key issues now being raised about the tests aligned to them.   

A. Criteria Used for Selection of Passages for Reading Tests

The first questions a responsible parent would ask about Common Core-aligned reading tests are: (1) What is the basis for selecting reading passages?  (2) Who actually selects them? We don’t know the answers to these questions for any Common Core-aligned tests, whether given in the Bay State or elsewhere, regardless of name.    

It would have been reasonable for the original USED-subsidized testing consortia (Partnership for the Assessment of Readiness for College and Career or PARCC, and Smarter Balanced Assessment Consortium or SBAC) to use the criteria that developers of the National Assessment of Educational Progress (NAEP) tests are supposed to use for NAEP reading tests. Why? Primarily because the chart showing the “percentage distributions” of basic types of reading passages on NAEP reading tests (informational or literary) is already in Common Core’s English language arts (ELA) document, and the chart is recommended as a guideline for the school reading curriculum even though these percentages were never intended by NAEP to guide the K-12 curriculum. NAEP documents tell us only that these percentages are for the different kinds of reading passages to be used on NAEP tests.  In fact, NAEP Steering Committee members were told that NAEP test developers deliberately do not assess dramatic literature (plays) on the grounds that test passages would have to be very long and would exceed word limits for test passages.  

Mary Crovo, who retired as Deputy Executive Director of the National Assessment Governing Board (NAGB) in December 2016 and who, before 2005, was Assistant Director of Test Development when I served on the Steering Committee for the development of the 2009 reading assessment standards, is one of the few people who can speak to this issue because of her many years of work with NAEP.  NAEP’s decision to exclude assessment of dramatic literature makes it clear that the percentages of literary and informational passages recommended at different educational levels for a Common Core-based K-12 curriculum were NOT intended by NAEP to shape a K-12 reading and literature curriculum. (Dramatic literature—think Shakespeare—was long considered by many as the central genre to be studied in high school English.)

Nor is there any research suggesting that a heavy dose of informational reading in secondary English classes develops reading skills as well as, or better than, the literary essays, biographies, and well-known speeches English teachers have always taught in their courses.  David Coleman, lead writer for Common Core’s ELA standards and now CEO of the College Board, probably didn’t understand this or know that members of NAGB in 2004 had helped to develop criteria for the kind of reading passages to be chosen by NAEP test developers. 

Passage Source: Among other criteria, the NAEP document on item specifications for the 2009 NAEP reading assessments says that reading passages are to “reflect our literary heritage by including significant works from varied historical periods.”  USED could easily have insisted on this criterion for the Common Core-aligned reading tests it funded since several of Common Core’s high school standards require the study of this country’s seminal political documents, as well as significant texts or authors in American literary history. But so far, no sample test items for college and career readiness tests can be found addressing this country’s seminal political documents. Released PARCC test items can be located via this website. SBAC provides sample test items here. Apparently, few test developers and educators care what is assessed by Common Core-aligned reading tests.

Overuse of Informational Snippets: Many sample passages in grade 10 or 11 test items aligned to Common Core’s reading standards cannot assess college readiness because they are snippets from what could be a long curriculum unit in science or history with a heavy discipline-based vocabulary load. Surely, if college readiness is to mean anything at all it should mean the ability to follow the gist of long stretches of prose or poetry. It’s hard to see how college readiness can be determined by test items consisting chiefly of short informational articles drenched in subject-related vocabulary.

The sample test item passages for grade 10 released by PARCC about 2013 (but no longer available, alas) demonstrated the use of whole selections at a high school reading level. A sample literary test item required students to compare “Daedalus and Icarus” by Ovid with a poem by Anne Sexton that was related in content. The sample informational selections for grade 11 included a letter by Abigail Adams to her husband and a letter on July 3, 1776 from John Adams to his wife. While these short, related selections constituted a promising set of selections, we do not know how typical these kinds of selections were or are in PARCC test items.  It is certainly not clear if any Common Core-aligned informational test items will be of an adequate length for judging readiness for, say, reading a chapter in a frequently assigned college science or history textbook.  

Other Test Issues: As of 2019, we still do not know what specific people have vetted test items in either reading or mathematics and how demanding the items are for high school college and career readiness tests (or for the revised SAT or ACT tests now judged by USED as legally usable as high school exit tests). We do not know if college teaching faculty in mathematics, science, engineering, and the humanities have been involved in determining cut-off or pass scores for college readiness. Nor do we know exact costs to the schools of what are called college readiness tests (say, compared to pre-Common Core MCAS in the Bay State) and, of far greater importance, what their scores mean to academic experts in the subjects tested.     

B. Low Expectations for College- and Career-Readiness

We must above all consider what Common Core means by “college readiness.”  Common Core itself claims that by addressing its standards, students will graduate from high school able to succeed in entry-level, credit-bearing academic college courses and in workforce training programs. College readiness thus means that students will not have to take a remedial course in mathematics or English if they seek to attend a non-selective college or a community college. 

In Mathematics: Yet, with respect to the coursework implied by the math standards themselves, college readiness reflects a relatively weak algebra II course, as mathematician James Milgram pointed out.  Both logarithms and the standard algebraic analysis of conic sections are missing, according to his examination of the math standards. With only a few advanced (+) standards in trigonometry filling the void between the algebra II standards and introductory college mathematics, Common Core’s standards apparently cannot help to prepare students for STEM careers, which require extensive high school coursework in trigonometry and/or precalculus.   

In English: We know much less about what college readiness in English language arts means.  Common Core’s ELA standards suggest few specific texts to read, and the range of titles in Appendix B in its ELA document illustrating the quality and “complexity” of what students should read from grade to grade is so broad by the high school years that no particular level of reading difficulty above grade 5 or 6 can be discerned.  A variety of research studies suggest that the reading level of the average American high school graduate is about grade 6.   Moreover, we don’t yet know where the pass score has been set in ELA or reading (or, if it has been set, who set it and what it means to English professors or anyone else).

C. What College Readiness Test Scores Tell Us 

What, then, can college readiness test scores in mathematics and reading tell us?  Since tests based on Common Core’s standards cannot address the mathematics requirements of selective public or private colleges/universities (because major topics in trigonometry and precalculus are not in Common Core’s standards, and state-mandated tests by law cannot address topics that are not in the state’s official standards), scores on Common Core-aligned tests can tell us only how many students may be ready for a non-selective or community college. It is unclear whether most colleges now have any reading requirements; they may rely simply on a score on a presumably college-related test such as a literature or language Advanced Placement (AP) test. And now that AP tests are aligned to Common Core’s standards, it is not clear what AP test scores themselves mean.

What will we as a society have gained and lost by the use of Common Core’s “college readiness” tests?  We will likely gain a much larger number of college graduates, assuming that more students will complete a college degree program because they haven’t had to take remedial coursework in their freshman year. But they are unlikely to know any more than they would have known if they had had to take remedial coursework because their for-credit college coursework will likely be adjusted downward to accommodate their lower level of high school achievement. 

Recall that the level of college readiness in Common Core mathematics is, to begin with, lower than what is currently required for admission to most two- and four-year colleges in this country. What this means in effect is that our colleges will become expensive high schools.  

College readiness tests based on Common Core’s standards will play two significant roles.  First, they will guarantee the presence of credit-bearing courses with low academic expectations in mathematics, reading (English), and possibly other freshman subjects. Second, the students they declare college-ready will change more than the college courses they enroll in.  How, we do not yet know. But it seems logical to expect large numbers of relatively low-performing high school students who have been declared college-ready based on a test with low expectations to have an impact on the other students in the college courses they are entitled to enroll in for credit.

New York Sees a Spike in Regents Exam Failures Five Years After Common Core

New York State Department of Education Building in Albany, NY. Photo credit: Matt H. Wade (CC-By-SA 3.0)

David Rubel, an education policy consultant, released a report showing a spike in the failure rate of New York students on the math and ELA Regents Exams, five years after the arrival of Common Core.

In his summary he writes:

It’s now five years since the Algebra 1 (Common Core) Regents Exam was first used in June of 2014. After five years of a transition period, schools should be in a much stronger position to teach the Common Core (now known as the Next Generation Learning Standards). However, this year’s test results show a surprising shift downward, with thousands more students failing the Algebra 1 Common Core Regents Exam. At the very least, the number of failing students should stay comparable with the pre-Common Core Integrated Algebra Exam. There was also a significant increase in the number of students failing the ELA Regents exam.

With the math exam he notes:

For reasons that have yet to be determined, last year’s Regents Exam was tougher for thousands of high school students. 13,074 more students failed the Algebra 1 exam this year than in 2016-17. The scoring system did not change, so other factors must be in play. Two high-need risk groups, students with disabilities and English Language Learners, saw more students failing. 61% of students with disabilities and 60% of English Language Learners are now failing the Algebra 1 Regents exam. Passing a math Regents exam is a requirement for graduation.

Regarding the ELA exam he wrote:

12,456 more students failed the ELA Regents in 2017-18 than in 2016-17, an increase of 6%. For the first two years of the ELA Common Core Exam, the test scores were impressive, with stable first-year (2015-16) results and even fewer students failing in the second year of test administration (2016-17) than with the old Comprehensive Regents exam. However, the 2017-18 test scores have thrown a wrench into the transition. The increase in failing students occurred with both students with disabilities (3,955) and English Language Learners (2,699). 49% of SWD students and 64% of ELL students failed the ELA Regents this year.

I can’t say that I am surprised. NAEP scores have been stagnant and there has been a widening gap between low and high performing students. ACT math scores have declined as well.

HT: The Hechinger Report

New Mexico to Stop Using PARCC

Newly minted New Mexico Governor Michelle Lujan Grisham ordered the state’s department of education to stop using PARCC.

KOB Channel 4 reports:

Lujan Grisham, in an executive order, called on the department to immediately begin working with key stakeholders to identify and implement a more effective, more appropriate and less intrusive method for assessing school performance that is compliant with the federal Every Student Succeeds Act.

The development of this alternative approach, intended to deliver a sounder methodology for the rating and assessments of New Mexico schools, will include teachers, administrators, parents, students and recognized professionals and experts in the field of student assessments.

“This is the first of many steps we will take to transform education in this state,” Gov. Lujan Grisham said. “High-stakes tests like PARCC do our schools a disservice, and we are about empowering our school system. Including those who will be most empowered by a better assessment in the process will help us build something better from the ground up, as opposed to a test mandated from on high.”

In a second executive order, Lujan Grisham called for an end to using PARCC in teacher evaluations. Reaching out to stakeholders in a similar fashion, the department will, under the order, strive to achieve balance in its ratings and assessments by incorporating into its analysis a variety of proven means of measuring teacher efficacy and performance. 

Since New Mexico’s math and ELA standards are Common Core, any new test will still be aligned to those standards, as required by the Every Student Succeeds Act.

New Mexico’s departure, coupled with New Jersey’s and Maryland’s upcoming exits, will drop the Common Core assessment consortium that once boasted 27 partners (including 24 states) down to four.

PARCC’s active partners include the District of Columbia, Illinois (grades 3-8 only), Louisiana (hybrid, grades 3-8 only), New Jersey (plans to withdraw), Maryland (plans to withdraw), Massachusetts (hybrid, grades 3-8 only), New Mexico, Bureau of Indian Affairs, and the Department of Defense Education Activity.

Court Overturns New Jersey’s PARCC Graduation Requirement

A state appellate court ruled unanimously on Monday against the New Jersey Department of Education’s requirement that students pass two assessments before they graduate.

Unfortunately, the ruling turned on the number of tests required, not on the requirement itself. The Associated Press reports:

The unanimous decision was made public Monday but won’t take effect for 30 days. It invalidates the state Department of Education’s requirement that students must pass standardized exams —commonly known as the PARCC tests — in Algebra I and English.

The three-judge panel found the requirement — which was approved in 2016 and was due to take effect with the class of 2020 — doesn’t match a state law that requires students to pass just a single test in 11th grade in order to graduate.

“We do not intend to micromanage the administration of the proficiency examination mandated by the (law),” the judges wrote in their 21-page opinion. The 30-day delay for the ruling to take effect gives the state time to appeal to the state Supreme Court if it wants and avoids disrupting any ongoing statewide administration of proficiency examinations.

I would not be surprised if the state argues that PARCC is, in reality, one test, and that the Algebra I and English tests are just sections of that one assessment.

So, I would not get excited that the graduation requirement is gone for good. The PARCC graduation requirement in New Jersey has been an impediment to the opt-out movement in the state. Parents should be able to determine whether their student takes a standardized assessment like this, not the state.

It’s unfortunate the court did not recognize that.

Gov. Jerry Brown, Assessment Control Freak

I don’t think anyone has accused California Governor Jerry Brown of being an advocate for local control, and here’s definitive proof that he isn’t. He vetoed a bill, AB 1951, last week that would have allowed local school districts to substitute the SAT or ACT for Smarter Balanced for 11th graders.

Since the vast majority of students who plan to go to college take one or both of those college-entrance exams, it is a move that makes sense.

For the record, I believe there should be alternatives to those assessments (I’ve profiled a couple here), and we have also seen colleges drop the assessment requirement altogether.

Brown’s answer to this is to require the University of California and California State University systems to accept Smarter Balanced as their college entrance exam.

In his veto message Brown wrote:

This bill requires the Superintendent of Public Instruction to approve one or more nationally recognized high school assessments that a local school may administer in lieu of the state-administered high school summative assessment, commencing with the 2019-20 school year.

Since 2010, California has eliminated standardized testing in grades 9 and 10 and the high school exit exam. While I applaud the author’s efforts to improve student access to college and reduce “testing fatigue” in grade 11, I am not convinced that replacing the state’s high school assessment with the Scholastic Aptitude Test or American College Test achieves that goal.

Our K-12 system and our public universities are now discussing the possible future use of California’s grade 11 state assessment for college admission purposes. This is a better approach to improving access to college for under-represented students and reducing “testing fatigue.”

This “better idea” of Governor Brown’s is not feasible, according to the author of the bill, Assemblyman Patrick O’Donnell (D-Long Beach). EdSource reports:

Neither system currently does that, but at the request of Kirst, who is president of the State Board of Education, and State Superintendent of Public Instruction Tom Torlakson, a UC administrator wrote in July that the UC would consider whether that would be feasible.

But O’Donnell said that even if CSU and UC were interested, it would take years for them to factor Smarter Balanced scores into their admissions criteria. His bill would have given districts the option of switching to the SAT or ACT in 2019-20.

He said that Brown’s veto message didn’t address his main reason for proposing his bill, which is to alert students of deficits in their skills before their junior year, in addition to encouraging more students to pursue college. Smarter Balanced tests students in 3rd to 8th grades and then 11th grade. It’s not given in 9th and 10th grades, creating a two-year gap. O’Donnell, a middle and high school teacher before his election to the Assembly, said that districts like Long Beach have used the Pre-SAT, starting in 8th grade, to fill in the vacuum of information by identifying what needs to be addressed before students take the SAT.

O’Donnell, who is the Assembly Education Committee Chair, told EdSource he plans to move the bill again next year when there is a new governor.

Maryland Plans to Dump PARCC

Governor Larry Hogan (R-MD)

Maryland plans to replace PARCC with an assessment of its own, The Baltimore Sun reports. Maryland’s upcoming departure, coupled with New Jersey’s exit, will drop the Common Core assessment consortium that once boasted 27 partners (including 24 states) down to five.

PARCC’s active partners include the District of Columbia, Illinois (grades 3-8 only), Louisiana (hybrid, grades 3-8 only), New Jersey (plans to withdraw), Maryland, Massachusetts (hybrid, grades 3-8 only), New Mexico, Bureau of Indian Affairs, and the Department of Defense Education Activity.

Maryland’s schools have struggled with PARCC since its implementation; less than half of the state’s students passed in 2017. The Maryland State Board of Education announced last fall that it was delaying the requirement that students pass PARCC to graduate.

Liz Bowie for The Baltimore Sun wrote:

The state is seeking bids from contractors to design a new assessment that requires less time to take and grade, but it will not be ready for use until the 2019-2020 school year. So the state will spend another $11 million to continue testing with PARCC this spring.

The impetus for change came from Maryland State Superintendent Karen Salmon and Gov. Larry Hogan, who said he got many complaints.

“Nearly everyone in Maryland — parents, teachers, students and the governor want these tests to end,” Hogan said at a Board of Public Works meeting last month.

As I’ve written before when a state has decided to jettison PARCC or Smarter Balanced: as long as a state continues to use Common Core math and ELA standards, it will have a Common Core-aligned assessment. The Every Student Succeeds Act mandates the alignment of a state’s standards and assessments.

Students Want College Board to Rescore June SAT Results

Since David Coleman took the helm of the College Board, the organization seems to have had one controversy after another, whether it is the revamping of the SAT or problems with its AP U.S. History and World History frameworks.

Now students are protesting the scoring from June’s SAT results.

The News & Observer reports:

Many students who took the SAT exam in June were surprised Wednesday to get back results that they thought were inaccurate because the score was lower than they thought. The College Board, which administers the SAT, told students that because versions of the exam given on different dates are easier than others, they use a statistical process called “equating” to grade the answers on a curve.

“Equating makes sure that a score for a test taken on one date is equivalent to a score from another date,” the College Board tweeted Thursday morning. “So, for example, a single incorrect answer on one administration could equal two or three incorrect answers on a more difficult version. The equating process ensures fairness for all students.”

The College Board’s response didn’t satisfy families who are using the results as part of the college application process. Students and parents took their complaints to social media with the hashtag #rescoreJuneSAT picking up momentum on Twitter.

Read the rest.
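
The College Board does not publish its equating tables, but the simplest textbook version of the idea, linear (mean-sigma) equating, shows how a raw score on one form gets mapped onto the scale of a reference form. The numbers below are invented; the College Board’s actual procedure is more elaborate:

```python
def linear_equate(raw_new, mean_new, sd_new, mean_ref, sd_ref):
    """Map a raw score on a new test form onto a reference form's scale
    by matching the two forms' means and standard deviations."""
    z = (raw_new - mean_new) / sd_new  # standing relative to the new form
    return mean_ref + z * sd_ref       # same standing on the reference form

# Suppose the June form was easier, so its mean raw score runs higher.
equated = linear_equate(raw_new=40, mean_new=35, sd_new=8,
                        mean_ref=32, sd_ref=8)
print(equated)  # -> 37.0: the same raw score "counts for less" on the easier form
```

That is why, as the tweet says, a single incorrect answer on one administration can cost more scaled-score points than on another.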

They note this comes at a time when the SAT has already lost ground to its rival, the ACT.

New Jersey Announces First Steps Away From PARCC

The New Jersey Department of Education announced the first steps to transition away from using PARCC as the state’s annual assessment required under the Every Student Succeeds Act. Governor Phil Murphy (D-NJ) said in January that it was time for the state to get rid of PARCC.

The New Jersey Department of Education (NJDOE) held a two-month, 21-county tour to collect recommendations from a reported 2,300 students, teachers, school administrators, education advocates, and community leaders.

(Parents?)

“Because of a focused, concentrated effort to reach out to New Jersey residents and to give them a voice at the table, we are on a clear path away from PARCC,” Murphy said in a released statement. “By making the transition in phases, we can ensure a smooth implementation in schools across the state and maintain compliance with current state and federal requirements.”

“A stronger, fairer New Jersey means one that prioritizes outreach and collaboration when making policy decisions,” said Education Commissioner Lamont O. Repollet in a statement for the NJDOE press release. “My staff and I went on a listening tour across the state to ensure that we understood the scope of interest, and we moved forward having considered the needs of students, educators, and broader community members in building the next generation assessment system by New Jersey, for New Jersey.”

NJDOE says the transition will occur over multiple stages, and PARCC will not be fully replaced until the 2020-2021 school year.

NJDOE, upon New Jersey State Board approval, plans to reduce the number of tests required for high school graduation from six to two, to provide flexibility for first-year English learners on the English language proficiency test, and to ensure that educators and parents receive test data in a timely manner. Currently, that data is not provided until after the school year ends.

The department also plans to immediately reduce the length of testing for all grades by 25 percent and to reduce the weight of the assessment in teacher evaluations.

Parental opt-outs were not addressed.

You can read the report and draft regulations. NJDOE says it will start the second phase of assessment outreach this summer, continuing through the 2018-2019 school year, and that it will focus on the “more complicated questions and issues” raised during the tour.