New Istation tool underwhelms for North Carolina’s dyslexic children


The controversy over Superintendent Mark Johnson’s unilateral adoption of the Istation K-3 reading diagnostic tool against the wishes of a team of experts continues to swirl as the public waits to see if the superintendent will release related records as required by law.

One reason the backlash has been so intense is Istation's limitations as a screener for dyslexia and other specific learning disabilities.

Identifying reading difficulties as early as possible is crucial to allowing schools to provide the targeted interventions necessary for student success. Effective screening for dyslexia requires having a reader produce sounds and read words aloud to determine whether phonological processing problems are present.

Because Istation is an online tool, its phonological awareness measure is limited to having students listen to a sound and match it with an answer choice, rather than actually segmenting a spoken word into individual sounds and producing them on their own. This approach is inconsistent with the International Dyslexia Association's recommendations on dyslexia assessment and appears to fall short of the North Carolina General Assembly's mandate that students "receive the necessary and appropriate screenings" as well. The mClass tool, which has been in statewide use since 2013, aligns much more closely with those requirements.

For its part, Istation continues to claim its tool is “capable of identifying and supporting students with learning disabilities” including dyslexia.  

Now more detailed claims by both Istation and Amplify, the parent company of mClass, have been provided by the North Carolina Department of Information Technology. Actually, "detailed" is probably not the right word for Istation's claims.

In the original RFP, both companies were asked by the Department of Public Instruction to provide a description of how measures “adequately and accurately identify indicators of risk for dyslexia in grades K-3.” 

Here’s the description Amplify provided in its Response to RFP:

Amplify and the co-authors of DIBELS Next support the use of a universal screener (the DIBELS Next measures assess decoding, accuracy, fluency, and comprehension) for identifying students with reading difficulties, including students at risk of dyslexia. Amplify has developed additional screening measures of Vocabulary, Spelling and Rapid Automatized Naming (RAN), that may also be administered to obtain additional information on risk for dyslexia and possible impact of difficulty on related skills.

The Vocabulary (VOC) measure is an indicator of a student's general vocabulary knowledge. It assesses a student's depth of knowledge of a set of grade-level high-frequency and high-utility words that are used across domains (Tier 2 words; Beck, McKeown, & Kucan, 2002) and content-specific words, and whether a student has strategies for making meaning of words encountered in text. It incorporates a variety of tasks that vary across grade levels and allow for assessing multiple dimensions of vocabulary knowledge across multiple contexts. The tasks included in each grade level are those that produced the most reliable results for that grade. The number of items increases in each grade level as students' ability to sustain attention increases with age.

The first task administered to all students is the two-question vocabulary task adapted from the work of Kearns & Biemiller (2010). In this task, students are asked two yes-or-no questions about a target word. This requires deeper knowledge of a word than many traditional tasks, as the same word is included in two contexts or questions. Evidence of the reliability, validity, and sensitivity to differences in students' vocabulary skills has been documented (see Kearns & Biemiller, 2010). In grades 1 through 3, students also complete traditional fill-in-the-blank questions. They read a sentence with a missing word and select the word that best completes the sentence from a set of four words. Distractor (incorrect) responses for each item include words that look or sound similar to the target word, words that mean almost the same thing as the target word but are not correct in the context of the sentence, or words that are related to the target word or sentence context but are not correct in the context of the full sentence.

Finally, in grades 2 and 3, students complete items that require matching a word to its basic definition. The words included in this question type are words that appear in earlier portions of the assessment. The purpose is to see if a student has basic knowledge of the definitions of these same words.

The Spelling measure is an indicator of a student's level of general spelling skills. It is designed based on the principles of General Outcome Measurement and Curriculum-Based Measurement (CBM; Deno, 1992). Assessments from this approach are designed to efficiently screen for students who are at risk for difficulty: they are brief assessments of critical skills that are sensitive to student learning and growth overall. CBM measures do not assess all skills within a domain but provide a snapshot of a student's skills in a given area using tasks that are instructionally useful and can be reliably administered (Deno, 2003).

The Amplify spelling measure incorporates the key features of CBM Spelling measure design and administration. It is administered on a computer or tablet so typical procedures for administering the measure were modified to fit the software environment. The target word is spoken (by the computer) and the student uses letter tiles to spell the word. Both correct letter sequences (CLS) and words spelled correctly (WSC) are calculated.

The Rapid Automatized Naming (RAN) measure indicates how quickly students can name aloud numeric symbols. While there is not strong agreement in the field on exactly which cognitive processes RAN measures, a large body of evidence has documented RAN as one of the best predictors of overall reading skill, including word reading, text reading, reading fluency, and comprehension, and as an area of difficulty for students with reading disabilities (Araujo, Reis, Petersson, & Faisca, 2015). Deficits in rapid automatized naming have also been shown to be a robust indicator of risk for dyslexia in children (Gaab, 2017).

Because of the strong predictive relationship that RAN displays with tasks that measure various reading skills, researchers hypothesize that completion of RAN tasks requires the coordination of multiple processes. "The seemingly simple task of naming a series of familiar items as quickly as possible appears to invoke a microcosm of the later developing, more elaborated reading circuit" (Norton & Wolf, 2012, p. 427). The full circuit requires coordination of attention, working memory, visual processing, and phonological processing, among other individual processes also required for reading.

The addition of these measures is in line with the definition of dyslexia included in NC HB 149. These measures provide additional information about a student’s processing, spelling, and decoding abilities. In addition, the Vocabulary screener and the existing mCLASS:Reading 3D measures allow teachers to continue to understand the overall reading skills of students and potential “secondary consequences” or problems in reading comprehension or vocabulary as described by North Carolina’s definition of dyslexia. The addition of these measures in the screening process aids in the development and implementation of targeted interventions with ongoing progress monitoring through a multi-tiered system of supports. Progress monitoring data should then be used to determine whether additional assessment and evaluation are needed for the student.

The Vocabulary measure is administered on the computer. Each item and all answer options are spoken (by the computer) to the students, who then select their answer choice. The Rapid Automatized Naming and Spelling measures are administered on the online student testing platform. To administer this assessment, an educator enables the measures from the Online Assessment Management portal, then logs into the student account to launch the RAN and Spelling assessments with the individual student. Both the teacher and student can view the screen and listen to the audio prompts that guide the student through a model, practice, and each assessment item. The teacher controls the input device and selects student responses.

For students with adequate self-regulation and computer skills, teacher assistance may not be needed for the Spelling measure, and students can interact directly to enter their responses.

A student may be flagged as demonstrating additional risk for reading difficulty, including dyslexia, when the DIBELS Next composite result is in the Well Below Benchmark range and results from either the Spelling or RAN measures are in the Well Below Benchmark range. Educators will have the option to include this information in the Home Connect Letter to share the student's performance on these additional measures with parents. Please see Appendix G (pgs. G-1 – G-23) for more information on the Vocabulary, Spelling, and RAN measures.
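For readers curious how those scoring and flagging rules actually work, they can be sketched roughly in code. This is a hypothetical illustration only: the function names and the simplified positional comparison for correct letter sequences are assumptions, not Amplify's actual implementation, and the flag rule reads Amplify's "either the Spelling and RAN measures" as an OR.

```python
# Hypothetical sketch of the spelling scores (CLS, WSC) and the
# additional-risk flag described in Amplify's RFP response.
# Names, category labels, and the simplified CLS comparison are
# illustrative assumptions, not Amplify's actual code.

def correct_letter_sequences(target: str, response: str) -> int:
    """Count correct adjacent letter pairs, with implicit word boundaries.

    A fully correct n-letter word scores n + 1. Real CBM scoring handles
    insertions/omissions via alignment; this version compares by position.
    """
    t, r = f"^{target}$", f"^{response}$"   # sentinel boundary markers
    return sum(1 for i in range(len(t) - 1) if r[i:i + 2] == t[i:i + 2])

def words_spelled_correctly(target: str, response: str) -> int:
    """1 if the whole word matches, else 0 (the per-word WSC score)."""
    return int(target == response)

WELL_BELOW = "Well Below Benchmark"

def additional_risk_flag(dibels_composite: str, spelling: str, ran: str) -> bool:
    """Flag added dyslexia risk when the DIBELS Next composite is Well
    Below Benchmark AND either the Spelling or RAN result is as well."""
    return dibels_composite == WELL_BELOW and WELL_BELOW in (spelling, ran)

# e.g. correct_letter_sequences("cat", "kat") credits the "at" pair and
# the final boundary but misses the first two sequences, giving 2 of 4.
```

The point of CLS over a simple right/wrong score is sensitivity: a student who writes "kat" for "cat" shows more spelling knowledge than one who writes "xyz," and a partial-credit metric can detect that growth over time.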

Now take a look at the “meh, we got that covered” response from Istation:

That’s literally all they had to say about it. And somehow, it was enough for Mark Johnson.

