Lately, I’ve been seeing more and more posts on social media asking for testing suggestions for students who exhibit subtle language-based difficulties. Many of these children are referred for initial assessments or reassessments as part of advocate/attorney-involved cases, while others are being assessed due to parental insistence that something “is not quite right” with their language and literacy abilities, even in the presence of “good grades.” Continue reading Comprehensive Assessment of Elementary Aged Children with Subtle Language and Literacy Deficits
Help, My Child is Receiving All These Therapies But It’s NOT Helping
On a daily basis I receive emails and messages from concerned parents and professionals, which read along these lines: “My child/student has been diagnosed with: dyslexia, ADHD, APD etc., s/he has been receiving speech, OT, vision, biofeedback, music therapies, etc. but nothing seems to be working.”
Up until now, I have been providing individualized responses to such queries. However, given the unnerving similarity of all the received messages, today I decided to write this post so that other individuals with similar concerns can see my response. Continue reading Help, My Child is Receiving All These Therapies But It’s NOT Helping
How Early can “Dyslexia” be Diagnosed in Children?
In recent years there has been a substantial rise in awareness pertaining to reading disorders in young school-aged children. Consequently, more and more parents and professionals are asking how early “dyslexia” can be diagnosed in children.
In order to adequately answer this question, it is important to understand the trajectory of development of literacy disorders in children. Continue reading How Early can “Dyslexia” be Diagnosed in Children?
It’s All Due to …Language: How Subtle Symptoms Can Cause Serious Academic Deficits
Scenario: Len is a 7-year, 2-month-old 2nd-grade student who struggles with reading and writing in the classroom. He is very bright and has a high average IQ, yet when he is speaking he frequently can’t get his point across to others due to excessive linguistic reformulations and word-finding difficulties. The problem is that Len passed all the typical educational and language testing with flying colors, receiving average scores across the board on various tests including the Woodcock-Johnson Fourth Edition (WJ-IV) and the Clinical Evaluation of Language Fundamentals-5 (CELF-5). Stranger still, he also aced the Comprehensive Test of Phonological Processing, Second Edition (CTOPP-2), so he is not even eligible for a “dyslexia” diagnosis. Len is clearly struggling in the classroom with coherently expressing himself, telling stories, understanding what he is reading, as well as putting his thoughts on paper. His parents have compiled impressively huge folders containing examples of his struggles. Yet because of his performance on the basic standardized assessment batteries, Len does not qualify for any functional assistance in the school setting, despite being virtually functionally illiterate in second grade.
The truth is that Len is quite a familiar figure to many SLPs, who at one time or another have encountered such a student and asked for guidance regarding the appropriate accommodations and services for him on various SLP-geared social media forums. But what makes Len such an enigma, one may inquire? Surely if the child had tangible deficits, wouldn’t standardized testing at least partially reveal them?
Well, it all really depends on what type of testing was administered to Len in the first place. A few years ago I wrote a post entitled: “What Research Shows About the Functional Relevance of Standardized Language Tests“. What researchers found is that there is a “lack of a correlation between frequency of test use and test accuracy, measured both in terms of sensitivity/specificity and mean difference scores” (Betz et al, 2013, 141). Furthermore, they also found that the most frequently used tests were comprehensive assessments, including the Clinical Evaluation of Language Fundamentals and the Preschool Language Scale, as well as one-word vocabulary tests such as the Peabody Picture Vocabulary Test. The most damaging finding was that “frequently SLPs did not follow up the comprehensive standardized testing with domain-specific assessments (critical thinking, social communication, etc.) but instead used the vocabulary testing as a second measure” (Betz et al, 2013, 140).
In other words, many SLPs use only the tests at hand rather than the RIGHT tests aimed at identifying the student’s specific deficits. But the problem doesn’t stop there. Due to the variation in psychometric properties of various tests, many children with language impairment are missed by standardized tests, receiving scores within the average range or scores not low enough to qualify for services.
Thus, “the clinical consequence is that a child who truly has a language impairment has a roughly equal chance of being correctly or incorrectly identified, depending on the test that he or she is given.” Furthermore, “even if a child is diagnosed accurately as language impaired at one point in time, future diagnoses may lead to the false perception that the child has recovered, depending on the test(s) that he or she has been given (Spaulding, Plante & Farinella, 2006, 69).”
There is, of course, yet another factor affecting our hypothetical client, and that is his relatively young age. This is especially evident with much of the educational and language testing designed for children in the 5-7 age group. Because the bar is set so low, concept-wise, for these age groups, many children with moderate language and literacy deficits can pass these tests with flying colors, only to be flagged by them literally two years later and identified with deficits, far too late in the game. Coupled with the fact that many SLPs do not utilize non-standardized measures to supplement their assessments, Len is in a pretty serious predicament.
But what if there was a do-over? What could we do differently for Len to rectify this situation? For starters, we need to pay careful attention to his profile of deficits in order to choose appropriate tests to evaluate his areas of need. The above can be accomplished in a number of ways. The SLP can interview Len’s teacher and his caregiver/s in order to obtain a summary of his pressing deficits. Depending on the extent of the reported deficits, the SLP can also provide them with a referral checklist to mark off the most significant areas of need.
In Len’s case, we already have a pretty good idea regarding what’s going on. We know that he passed basic language and educational testing, so in the words of Dr. Geraldine Wallach, we need to keep “peeling the onion” via the administration of more sensitive tests to tap into Len’s reported areas of deficits which include: word-retrieval, narrative production, as well as reading and writing.
For that purpose, Len is a good candidate for the administration of the Test of Integrated Language and Literacy (TILLS), which was developed to identify language and literacy disorders, has good psychometric properties, and contains subtests for assessment of relevant skills such as reading fluency, reading comprehension, phonological awareness, spelling, as well as writing in school-age children.
Given Len’s reported history of narrative production deficits, Len is also a good candidate for the administration of the Social Language Development Test Elementary (SLDTE). Here’s why. Research indicates that narrative weaknesses significantly correlate with social communication deficits (Norbury, Gemmell & Paul, 2014). As such, it’s not just children with Autism Spectrum Disorders who present with impaired narrative abilities. Many children with developmental language disorder (DLD) (#devlangdis) can present with significant narrative deficits affecting their social and academic functioning, which means that their social communication abilities need to be tested to confirm/rule out the presence of these difficulties.
However, standardized tests are not enough, since even the best standardized tests have significant limitations. As such, several non-standardized assessments in the areas of narrative production, reading, and writing may be recommended for Len to meaningfully supplement his testing.
Let’s begin with an informal narrative assessment, which provides detailed information regarding the microstructural and macrostructural aspects of storytelling as well as the child’s thought processes and socio-emotional functioning. My nonstandardized narrative assessments are based on the book elicitation recommendations from the SALT website. For 2nd graders, I use the book by Helen Lester entitled Pookins Gets Her Way. I first read the story to the child, then cover up the words and ask the child to retell the story based on the pictures. I read the story first because “the model narrative presents the events, plot structure, and words that the narrator is to retell, which allows more reliable scoring than a generated story that can go in many directions” (Allen et al, 2012, p. 207).
As the child is retelling the story, I digitally record him using the Voice Memos application on my iPhone for later transcription and thorough analysis. During storytelling, I only use the prompts ‘What else can you tell me?’ and ‘Can you tell me more?’ to elicit additional information. I try not to prompt the child excessively, since I am interested in cataloging all of his narrative-based deficits. After I transcribe the sample, I analyze it and make sure that I include the transcription and a detailed write-up in the body of my report, so parents and professionals can see and understand the nature of the child’s errors/weaknesses.
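For readers who like to automate parts of the microstructural analysis, below is a minimal sketch (in Python) of how a hand-segmented transcript can be reduced to a few common microstructure counts. This is an illustration only, not the SALT software or its conventions: utterance segmentation and word counting are deliberately simplified, and the metric names are just common conventions rather than anything prescribed by a specific protocol.

```python
import re

def microstructure_metrics(transcript: str) -> dict:
    """Rough microstructure counts from a plain-text retell transcript.

    Assumes the examiner has already segmented the transcript so that each
    line contains exactly one utterance (mazes/revisions are not handled).
    """
    utterances = [u.strip() for u in transcript.splitlines() if u.strip()]
    all_words = []
    for utt in utterances:
        all_words.extend(re.findall(r"[a-zA-Z']+", utt.lower()))

    total_words = len(all_words)
    return {
        "total_utterances": len(utterances),
        "total_words": total_words,
        # mean length of utterance in words
        "mlu_words": total_words / len(utterances) if utterances else 0.0,
        # number of different words (a simple lexical diversity measure)
        "ndw": len(set(all_words)),
    }

# Hypothetical three-utterance retell, purely for illustration:
sample = "the girl wanted her way\nshe made faces at everyone\nin the end she learned her lesson"
print(microstructure_metrics(sample))
```

Counts like these only supplement, never replace, the qualitative write-up of story grammar, cohesion, and error patterns described above.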
Now we are ready to move on to a brief nonstandardized reading assessment. For this purpose, I often use the books from the Continental Press series entitled Reading for Comprehension, which contains books for grades 1-8. After I confirm with either the parent or the child’s teacher that the selected passage is reflective of the complexity of work presented in the classroom for his grade level, I ask the child to read the text. As the child is reading, I calculate the number of words he reads correctly per minute and note the types of errors he exhibits during reading. Then I ask the child to state the main idea of the text, summarize its key points, define select text-embedded vocabulary words, and answer a few verbally presented reading comprehension questions. After that, I provide the child with the accompanying five-question multiple-choice worksheet and ask him to complete it. I analyze my results in order to determine whether I have accurately captured the child’s reading profile.
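For those who want the words-correct-per-minute arithmetic spelled out, here is a minimal sketch; the passage length, error count, and timing below are made-up illustration values, not data from an actual student.

```python
def oral_reading_fluency(total_words_read: int, errors: int, seconds: float) -> dict:
    """Return words correct per minute (WCPM) and accuracy for one timed reading."""
    words_correct = total_words_read - errors
    minutes = seconds / 60.0
    return {
        "wcpm": words_correct / minutes,
        "accuracy_pct": 100.0 * words_correct / total_words_read,
    }

# Hypothetical example: a 2nd grader reads a 142-word passage in 90 seconds with 12 errors.
print(oral_reading_fluency(total_words_read=142, errors=12, seconds=90))
# -> roughly 87 WCPM at ~92% accuracy, which can then be compared against
#    published grade-level norms (e.g., Hasbrouck & Tindal, 2006).
```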
Finally, if any additional information is needed, I administer a nonstandardized writing assessment, which I base on the Common Core State Standards for 2nd grade. For this task, I provide the student with a typical second-grade writing prompt and give him 15-20 minutes to generate a writing sample. I then analyze the writing sample with respect to contextual conventions (punctuation, capitalization, grammar, and syntax) as well as story composition (overall coherence and cohesion of the written sample).
The above relatively short assessment battery (two standardized tests and three informal assessment tasks), which takes approximately 2-2.5 hours to administer, allows me to create a comprehensive profile of the child’s language and literacy strengths and needs. It also allows me to generate targeted goals in order to begin effective and meaningful remediation of the child’s deficits.
Children like Len will, unfortunately, remain unidentified unless they are administered more sensitive tasks to better understand their subtle pattern of deficits. Consequently, to ensure that they do not fall through the cracks of our educational system due to misguided overreliance on a limited number of standardized assessments, it is very important that professionals select the right assessments, rather than the assessments at hand, in order to accurately determine the child’s areas of needs.
References:
- Allen, M., Ukrainetz, T., & Carswell, A. (2012). The narrative language performance of three types of at-risk first-grade readers. Language, Speech, and Hearing Services in Schools, 43(2), 205-221.
- Betz et al. (2013). Factors influencing the selection of standardized tests for the diagnosis of specific language impairment. Language, Speech, and Hearing Services in Schools, 44, 133-146.
- Hasbrouck, J., & Tindal, G. A. (2006). Oral reading fluency norms: A valuable assessment tool for reading teachers. The Reading Teacher, 59(7), 636-644.
- Norbury, C. F., Gemmell, T., & Paul, R. (2014). Pragmatics abilities in narrative production: A cross-disorder comparison. Journal of Child Language, 41(3), 485-510.
- Peña, E. D., Spaulding, T. J., & Plante, E. (2006). The composition of normative groups and diagnostic decision making: Shooting ourselves in the foot. American Journal of Speech-Language Pathology, 15, 247-254.
- Spaulding, Plante, & Farinella (2006). Eligibility criteria for language impairment: Is the low end of normal always appropriate? Language, Speech, and Hearing Services in Schools, 37, 61-72.
- Spaulding, Szulga, & Figueria (2012). Using norm-referenced tests to determine severity of language impairment in children: Disconnect between U.S. policy makers and test developers. Language, Speech, and Hearing Services in Schools, 43, 176-190.
APD Update: New Developments on an Old Controversy
In the past two years, I wrote a series of research-based posts (HERE and HERE) regarding the validity of (Central) Auditory Processing Disorder (C/APD) as a standalone diagnosis, as well as questioning its utility for classification purposes in the school setting.
Once again I want to reiterate that I was in no way disputing the legitimate symptoms (e.g., difficulty processing language, difficulty organizing narratives, difficulty decoding text, etc.), which the students diagnosed with “CAPD” were presenting with.
Rather, I was citing research to indicate that these symptoms were indicative of broader linguistic-based deficits, which required targeted linguistic/literacy-based interventions rather than recommendations for specific prescriptive programs (e.g., CAPDOTS, Fast ForWord, etc.), or mere accommodations.
I was also significantly concerned that overfocus on the diagnosis of (C)APD tended to obscure REAL, language-based deficits in children and either forced SLPs to address erroneous therapeutic targets based on AuD recommendations or restricted these students to mere accommodations rather than rightful therapeutic remediation. Continue reading APD Update: New Developments on an Old Controversy
Review of the Test of Integrated Language and Literacy (TILLS)
The Test of Integrated Language & Literacy Skills (TILLS) is an assessment of oral and written language abilities in students 6–18 years of age. Published in the fall of 2015, it is unique in that it aims to thoroughly assess skills such as reading fluency, reading comprehension, phonological awareness, spelling, and writing in school-age children. As I have been using this test since the time it was published, I wanted to take the opportunity today to share a few of my impressions of this assessment.
First, a little background on why I chose to purchase this test so shortly after I had purchased the Clinical Evaluation of Language Fundamentals – 5 (CELF-5). Soon after I started using the CELF-5, I noticed that it tended to considerably overinflate my students’ scores on a variety of its subtests. In fact, I noticed that unless a student had a fairly severe degree of impairment, the majority of his/her scores came out either low average or only slightly below average (click for more info on why this was happening HERE, HERE, or HERE). Consequently, I was excited to learn about the development of the TILLS, almost simultaneously through ASHA as well as the SPELL-Links ListServ. I was particularly happy because I knew that some of this test’s developers (e.g., Dr. Elena Plante, Dr. Nickola Nelson) had published solid research in the areas of psychometrics and literacy, respectively.
According to its developers, the TILLS has been standardized for three purposes:
- to identify language and literacy disorders
- to document patterns of relative strengths and weaknesses
- to track changes in language and literacy skills over time
The subtests can be administered in isolation (with the exception of a few), or the test can be administered in its entirety. Administering all 15 subtests takes approximately an hour and a half, while administering the core subtests typically takes ~45 minutes.
Please note that there are 5 subtests that should not be administered to students 6;0-6;5 years of age because many typically developing students are still mastering the required skills.
- Subtest 5 – Nonword Spelling
- Subtest 7 – Reading Comprehension
- Subtest 10 – Nonword Reading
- Subtest 11 – Reading Fluency
- Subtest 12 – Written Expression
However, if needed, there are several tests of early reading and writing abilities available for the assessment of children under 6;5 years of age with suspected literacy deficits (e.g., the Test of Early Reading Ability–Third Edition (TERA-3), the Test of Early Written Language–Third Edition (TEWL-3), etc.).
Let’s move on to take a deeper look at its subtests. Please note that for the purposes of this review all images came directly from and are the property of Brookes Publishing Co (clicking on each of the below images will take you directly to their source).
1. Vocabulary Awareness (VA) (description above) requires students to display considerable linguistic and cognitive flexibility in order to earn an average score. It works well for teasing out students with weak vocabulary knowledge and use, as well as students who are unable to quickly and effectively analyze words for deeper meaning and come up with effective definitions of all possible word associations. Be mindful of the fact that even though the words are presented to the students in written format in the stimulus book, the examiner is still expected to read all the words to the students. Consequently, students with good vocabulary knowledge and strong oral language abilities can still pass this subtest despite the presence of significant reading weaknesses. Recommendation: I suggest informally checking the student’s word reading abilities by asking them to read all the words before reading all the word choices to them. This way you can informally document any word misreadings made by the student, even in the presence of an average subtest score.
2. The Phonemic Awareness (PA) subtest (description above) requires students to isolate and delete initial sounds in words of increasing complexity. While this subtest does not require sound isolation and deletion in various word positions, as tests such as the CTOPP-2: Comprehensive Test of Phonological Processing–Second Edition or the Phonological Awareness Test 2 (PAT 2) do, it is still a highly useful and reliable measure of phonemic awareness (one of many precursors to reading fluency success). This is largely because, after the initial directions are given, the student is expected to remember to isolate the initial sounds in words without any prompting from the examiner. Thus, this task also indirectly tests the student’s executive function abilities in addition to their phonemic awareness skills.
3. The Story Retelling (SR) subtest (description above) requires students to do just that: retell a story. Be mindful of the fact that the presented stories are of reduced complexity. Thus, unless the students possess significant retelling deficits, this subtest may not capture their true retelling abilities. Recommendation: Consider supplementing this subtest with informal narrative measures. For younger children (kindergarten and first grade), I recommend using wordless picture books to perform a dynamic assessment of their retelling abilities following a clinician’s narrative model (e.g., HERE). For early elementary-aged children (grades 2 and up), I recommend using picture books, which are first read to and then retold by the students with the benefit of pictorial but not written support. Finally, for upper elementary-aged children (grades 4 and up), it may be helpful for the students to retell a book or a movie seen recently (or liked significantly) by them without the benefit of visual support altogether (e.g., HERE).
4. The Nonword Repetition (NR) subtest (description above) requires students to repeat nonsense words of increasing length and complexity. Weaknesses in the area of nonword repetition have consistently been associated with language impairments and learning disabilities due to the task’s heavy reliance on phonological segmentation as well as phonological and lexical knowledge (Leclercq, Maillart, & Majerus, 2013). Thus, both monolingual and simultaneously bilingual children with language and literacy impairments will be observed to present with patterns of segment substitutions (subtle substitutions of sounds and syllables in the presented nonsense words) as well as segment deletions in nonword sequences longer than 2-3 or 3-4 syllables (depending on the child’s age).
5. The Nonword Spelling (NS) subtest (description above) requires the students to spell nonwords from the Nonword Repetition (NR) subtest. Consequently, the Nonword Repetition (NR) subtest needs to be administered prior to this subtest in the same assessment session. In contrast to real-word spelling tasks, students cannot memorize the spelling of the presented words, which are nonetheless bound by the orthographic and phonotactic constraints of the English language. While this is a highly useful subtest, it is important to note that simultaneously bilingual children may present with decreased scores due to vowel errors. Consequently, it is important to analyze the subtest results in order to determine whether dialectal differences, rather than the presence of an actual disorder, are responsible for the error patterns.
6. The Listening Comprehension (LC) subtest (description above) requires the students to listen to short stories and then definitively answer story questions via the available answer choices: “Yes,” “No,” and “Maybe.” This subtest also indirectly measures the students’ metalinguistic awareness skills, as these are needed to detect when the text does not provide sufficient information to answer a particular question definitively (i.e., when a “Maybe” response is called for). Be mindful of the fact that because the students are not expected to provide sentential responses to questions, it may be important to supplement this subtest with another listening comprehension assessment. Tests such as the Listening Comprehension Test-2 (LCT-2), the Listening Comprehension Test-Adolescent (LCT-A), or the Executive Function Test-Elementary (EFT-E) may be useful if language processing and listening comprehension deficits are suspected or reported by parents or teachers. This is particularly important to do with students who may be ‘good guessers’ but who are also reported to present with word-finding difficulties at the sentence and discourse levels.
7. The Reading Comprehension (RC) subtest (description above) requires the students to read short stories and answer story questions in “Yes,” “No,” and “Maybe” format. This subtest is not standalone and must be administered immediately following the administration of the Listening Comprehension subtest. The student is asked to read the first story out loud in order to determine whether s/he can proceed with taking this subtest or should discontinue due to being an emergent reader. The discontinue criterion is making 7 errors during the reading of the first story and its accompanying questions. Unfortunately, in my clinical experience this subtest is not always accurate at identifying children with reading-based deficits.
While I find it terrific for students with severe-profound reading deficits and/or below-average IQ, a number of my students with average IQ and moderately impaired reading skills managed to pass it via a combination of guessing and luck, despite being observed to misread between 40-60% of the presented words aloud. Be mindful of the fact that such students may typically make up to 5-6 errors during the reading of the first story. Thus, according to the administration guidelines, these students will be allowed to proceed and take this subtest. They will then continue to make text misreadings during each story presentation (you will know that by asking them to read each story aloud vs. silently). However, because the response mode is in definitive (“Yes,” “No,” and “Maybe”) vs. open-ended question format, a number of these students will earn average scores by being successful guessers. Recommendation: I highly recommend supplementing the administration of this subtest with grade-level (or below grade-level) texts (see HERE and/or HERE) to assess the student’s reading comprehension informally.
I present a full one-page text to the students and ask them to read it to me in its entirety. I audio/video record the student’s reading for further analysis (see the Reading Fluency section below). After the completion of the story, I ask the student questions with a focus on main-idea comprehension and vocabulary definitions. I also ask questions pertaining to story details. Depending on the student’s age, I may ask them abstract/factual text questions with and without text access. Overall, I find that informal administration of grade-level (or even below grade-level) texts, coupled with the administration of standardized reading tests, provides me with a significantly better understanding of the student’s reading comprehension abilities than the administration of standardized reading tests alone.
8. The Following Directions (FD) subtest (description above) measures the student’s ability to execute directions of increasing length and complexity. It measures the student’s short-term, immediate, and working memory, as well as their language comprehension. What is interesting about the administration of this subtest is that the graphic symbols (e.g., objects, shapes, letters, and numbers) the student is asked to modify remain covered as the instructions are given (to prevent visual rehearsal). After being presented with the oral instruction, the students are expected to move the card covering the stimuli and then to execute the visual-spatial, directional, sequential, and logical if-then instructions by marking them on the response form. The fact that the visual stimuli remain covered until the last moment increases the demands on the student’s memory and comprehension. The subtest was created to simulate a teacher’s use of procedural language (giving directions) in the classroom setting (as per the developers).
9. The Delayed Story Retelling (DSR) subtest (description above) needs to be administered during the same session as the Story Retelling (SR) subtest, approximately 20 minutes after the SR subtest administration. Despite the relatively short passage of time between the two subtests, it is considered a measure of long-term memory as related to narrative retelling of reduced complexity. Here, the examiner can compare the student’s performance across the two subtests to determine whether the student did better or worse on either measure (e.g., recalled more information after a period of time passed vs. immediately after being read the story). However, as mentioned previously, some students may recall the previously presented story fairly accurately and as a result may obtain an average score despite a history of teacher/parent-reported long-term memory limitations. Consequently, it may be important for the examiner to supplement the administration of this subtest with a recall of a movie/book the student has recently seen/read (a few days ago) in order to compare both performances and note any weaknesses/limitations.
10. The Nonword Reading (NR) subtest (description above) requires students to decode nonsense words of increasing length and complexity. What I love about this subtest is that the students are unable to effectively guess words (as many tend to routinely do when presented with real words). Consequently, this subtest will tease out which students have good letter/sound correspondence abilities as well as solid orthographic, morphological, and phonological awareness skills, and which ones have only memorized sight words and are now having difficulty decoding unfamiliar words as a result.
11. The Reading Fluency (RF) subtest (description above) requires students to fluently and correctly read facts which make up simple stories. Here, the keys to attaining an average score are accuracy and automaticity. In contrast to the previous subtest, the words are now presented in meaningful, simple syntactic contexts.
It is important to note that the Reading Fluency subtest of the TILLS has a negatively skewed distribution. As per the authors, “a large number of typically developing students do extremely well on this subtest and a much smaller number of students do quite poorly.”
Thus, “the mean is to the left of the mode” (see publisher’s image below). This is why a student could earn an average standard score (near the mean) yet a low percentile rank when true percentiles are used rather than NCE (Normal Curve Equivalent) percentiles.
Consequently, under certain conditions (see HERE), the percentile rank (vs. the NCE percentile) will be a more accurate representation of the student’s ability on this subtest.
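To make the skew argument concrete, here is a minimal sketch of the statistics involved. The simulated distribution below is invented purely to illustrate the shape of the argument; it is not TILLS normative data. Because NCE percentiles assume a normal curve, a score exactly at the mean maps to the 50th NCE percentile, whereas the empirical (true) percentile of that same score can be far lower when most students cluster near the ceiling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical negatively skewed score distribution: 90% of simulated students
# near ceiling, 10% spread out in a long low tail.
high = rng.normal(loc=95, scale=3, size=9_000)
low = rng.uniform(low=40, high=80, size=1_000)
scores = np.concatenate([high, low])

mean_score = scores.mean()
true_percentile = 100 * (scores < mean_score).mean()

print(f"mean score: {mean_score:.1f}")
print(f"true percentile of a score at the mean: {true_percentile:.1f}")
print("NCE percentile of a score at the mean: 50.0 (by definition of the normal model)")
```

In this made-up example the student sitting at the mean lands around the 20th true percentile, which mirrors the authors’ caution about interpreting “average” standard scores on this subtest.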
Indeed, due to the reduced complexity of the presented words some students (especially younger elementary aged) may obtain average scores and still present with serious reading fluency deficits.
I frequently see this in students with average IQ and good long-term memory, who by second and third grade have managed to memorize an admirable number of sight words, which makes their reading deficits appear minimized. Recommendation: If you suspect that your student belongs to the above category, I highly recommend supplementing this subtest with an informal measure of reading fluency. This can be done by presenting the student with a grade-level text (I find science and social studies texts particularly useful for this purpose) and asking them to read several paragraphs from it (see HERE and/or HERE).
As the students are reading, I calculate their reading fluency by counting the number of words they read correctly per minute. I find this very useful, as it allows me to better understand their reading profile (e.g., fast/inaccurate reader, slow/inaccurate reader, slow/accurate reader, fast/accurate reader). As the student is reading, I note their pauses, misreadings, word-attack skills, and the like. Then, I write a summary comparing the student’s reading fluency on both standardized and informal assessment measures in order to document the student’s strengths and limitations.
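Building on the WCPM sketch shown earlier, here is a minimal sketch of how those profile labels can be assigned. The 90%-accuracy line and the grade-level WCPM benchmark are illustrative placeholders, not published cut-offs; in practice the benchmark would come from norms such as Hasbrouck & Tindal (2006) and clinical judgment.

```python
def reading_profile(wcpm: float, accuracy_pct: float, grade_benchmark_wcpm: float) -> str:
    """Label a reader as fast/slow and accurate/inaccurate using assumed cut-offs."""
    rate = "fast" if wcpm >= grade_benchmark_wcpm else "slow"
    precision = "accurate" if accuracy_pct >= 90.0 else "inaccurate"
    return f"{rate}/{precision} reader"

# Hypothetical example: 87 WCPM at 92% accuracy against an assumed 100 WCPM benchmark.
print(reading_profile(wcpm=87, accuracy_pct=92, grade_benchmark_wcpm=100))  # slow/accurate reader
```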
12. The Written Expression (WE) subtest (description above) needs to be administered to the students immediately after the administration of the Reading Fluency (RF) subtest because the student is expected to integrate a series of facts presented in the RF subtest into their writing sample. There are 4 stories in total for the 4 different age groups.
The examiner needs to show the student a different story which integrates simple facts into a coherent narrative. After the examiner reads that simple story to the student, s/he is expected to tell the student that the story is okay, but “sounds kind of choppy.” The examiner then needs to show the student an example of how the facts could be put together in a way that sounds more interesting and less choppy by combining sentences (see below). Finally, the examiner will ask the student to rewrite the story presented to them in a similar manner (e.g., “less choppy and more interesting”).
After the student finishes his/her story, the examiner will analyze it and generate the following scores: a discourse score, a sentence score, and a word score. Detailed instructions as well as the Examiner’s Practice Workbook are provided to assist with scoring as it takes a bit of training as well as trial and error to complete it, especially if the examiners are not familiar with certain procedures (e.g., calculating T-units).
Full disclosure: Because the above subtest is still essentially sentence combining, I have only used it a handful of times with my students. Typically, when I have used it in the past, most of my students fell into two categories: those who failed it completely (by copying the text word for word, failing to generate any written output, etc.) and those who passed it with flying colors but still presented with notable written output deficits. Consequently, I have replaced the Written Expression subtest administration with the administration of standardized writing tests, which I supplement with informal grade-level expository, persuasive, or narrative writing samples.
Having said that, many clinicians may not have access to other standardized writing assessments, or may lack the time to administer entire standardized writing measures (which frequently take between 60 and 90 minutes to administer). Consequently, in the absence of other standardized writing assessments, this subtest can be effectively used to gauge the student’s basic writing abilities and, if needed, effectively supplemented by informal writing measures (mentioned above).
13. The Social Communication (SC) subtest (description above) assesses the students’ ability to understand vocabulary associated with communicative intentions in social situations. It requires students to comprehend how people with certain characteristics might respond in social situations by formulating responses which fit the social contexts of those situations. Essentially students become actors who need to act out particular scenes while viewing select words presented to them.
Full disclosure: Similar to my infrequent administration of the Written Expression subtest, I have also administered this subtest very infrequently to students. Here is why.
I am an SLP who works full-time in a psychiatric hospital with children diagnosed with significant psychiatric impairments and concomitant language and literacy deficits. As a result, a significant portion of my job involves comprehensive social communication assessments to catalog my students’ significant deficits in this area. Yet past administrations of this subtest showed me that a number of my students can pass it quite easily despite presenting with notable and easily evidenced social communication deficits. Consequently, I prefer the administration of comprehensive social communication testing when working with children in my hospital-based program or in my private practice, where I perform independent comprehensive evaluations of language and literacy (IEEs).
Again, as I’ve previously mentioned, many clinicians may not have access to other standardized social communication assessments, or may lack the time to administer entire standardized measures. Consequently, in the absence of other social communication assessments, this subtest can be used to get a baseline of the student’s basic social communication abilities and then be supplemented with informal social communication measures such as the Informal Social Thinking Dynamic Assessment Protocol (ISTDAP) or observational social pragmatic checklists.
14. The Digit Span Forward (DSF) subtest (description above) is a relatively isolated measure of short-term and verbal working memory (it minimizes demands on other aspects of language such as syntax or vocabulary).
15. The Digit Span Backward (DSB) subtest (description above) assesses the student’s working memory and requires the student to mentally manipulate the presented stimuli in reverse order. It allows the examiner to observe the strategies (e.g., verbal rehearsal, visual imagery, etc.) the students use to aid themselves in the process. Please note that the Digit Span Forward subtest must be administered immediately before the administration of this subtest.
SLPs who have used tests such as the Clinical Evaluation of Language Fundamentals – 5 (CELF-5) or the Test of Auditory Processing Skills – Third Edition (TAPS-3) should be highly familiar with both subtests as they are fairly standard measures of certain aspects of memory across the board.
To continue, in addition to subtests which assess the students’ literacy abilities, the TILLS also possesses a number of interesting features.
For starters, there is the TILLS Easy Score, which allows examiners to do their scoring online. It is incredibly easy and effective. After clicking on the link and filling out the preliminary demographic information, all the examiner needs to do is plug in the subtest raw scores, and the system does the rest. After the raw scores are plugged in, the system will generate a PDF document with all the data, which includes (but is not limited to) standard scores, percentile ranks, as well as a variety of composite and core scores. The examiner can then save the PDF on their device (laptop, PC, tablet, etc.) for further analysis.
Then there is the quadrant model. According to the TILLS sampler (HERE), “it allows the examiners to assess and compare students’ language-literacy skills at the sound/word level and the sentence/discourse level across the four oral and written modalities—listening, speaking, reading, and writing” and then create “meaningful profiles of oral and written language skills that will help you understand the strengths and needs of individual students and communicate about them in a meaningful way with teachers, parents, and students” (pg. 21).
Then there is the Student Language Scale (SLS), a one-page checklist that parents, teachers (and even students) can fill out to informally identify language- and literacy-based strengths and weaknesses. It allows for meaningful input from multiple sources regarding the student’s performance (as per IDEA 2004) and can be used not just with the TILLS but with other tests or even in isolation (as per the developers).
Furthermore, according to the developers, because the normative sample included several special-needs populations, the TILLS can be used with students diagnosed with ASD, students who are deaf or hard of hearing (see caveat), as well as students with intellectual disabilities (as long as they are functioning at a developmental age of 6 or above).
According to the developers, the TILLS is aligned with the Common Core State Standards and can be administered as frequently as two times a year for progress monitoring (a minimum of 6 months after the first administration).
With respect to bilingualism, examiners can use it with caution with simultaneous English learners but not with sequential English learners (see further explanation HERE). Translations of the TILLS are not allowed, as they would undermine the test’s validity and reliability.
So there you have it: these are just some of my impressions regarding this test. Some of you may notice that I spent a significant amount of time pointing out the test’s limitations. However, it is very important to note that research indicates there is no such thing as a “perfect standardized test” (see HERE for more information). All standardized tests have their limitations.
Having said that, I think that TILLS is a PHENOMENAL addition to the standardized testing market, as it TRULY appears to assess not just language but also literacy abilities of the students on our caseloads.
That’s all from me; however, before signing off I’d like to provide you with more resources and information, which can be reviewed in reference to TILLS. For starters, take a look at Brookes Publishing TILLS resources. These include (but are not limited to) TILLS FAQ, TILLS Easy-Score, TILLS Correction Document, as well as 3 FREE TILLS Webinars. There’s also a Facebook Page dedicated exclusively to TILLS updates (HERE).
But that’s not all. Dr. Nelson and her colleagues have been tirelessly lecturing about the TILLS for a number of years, and many of their past lectures and presentations are available on the ASHA website as well as on the web (e.g., HERE, HERE, HERE, etc). Take a look at them as they contain far more in-depth information regarding the development and implementation of this groundbreaking assessment.
To access a fully editable TILLS template, click HERE
Disclaimer: I did not receive a complimentary copy of this assessment for review nor have I received any encouragement or compensation from either Brookes Publishing or any of the TILLS developers to write it. All images of this test are direct property of Brookes Publishing (when clicked on all the images direct the user to the Brookes Publishing website) and were used in this post for illustrative purposes only.
References:
Leclercq, A., Maillart, C., & Majerus, S. (2013). Nonword repetition problems in children with SLI: A deficit in accessing long-term linguistic representations? Topics in Language Disorders, 33(3), 238-254.
Related Posts:
- Components of Comprehensive Dyslexia Testing: Part I- Introduction and Language Testing
- Part II: Components of Comprehensive Dyslexia Testing – Phonological Awareness and Word Fluency Assessment
- Part III: Components of Comprehensive Dyslexia Testing – Reading Fluency and Reading Comprehension
- Part IV: Components of Comprehensive Dyslexia Testing – Writing and Spelling
- Special Education Disputes and Comprehensive Language Testing: What Parents, Attorneys, and Advocates Need to Know
- Why (C) APD Diagnosis is NOT Valid!
- What Are Speech Pathologists To Do If the (C)APD Diagnosis is NOT Valid?
- What do Auditory Memory Deficits Indicate in the Presence of Average General Language Scores?
- Why Are My Child’s Test Scores Dropping?
- Comprehensive Assessment of Adolescents with Suspected Language and Literacy Disorders
If It’s NOT CAPD Then Where do SLPs Go From There?
In July 2015 I wrote a blog post entitled: “Why (C) APD Diagnosis is NOT Valid!” citing the latest research literature to explain that the controversial diagnosis of (C)APD tends to
a) detract from understanding that the child presents with legitimate language based deficits in the areas of comprehension, expression, social communication and literacy development
b) may result in the above deficits not getting adequately addressed due to the provision of controversial APD treatments
To CLARIFY, I was NOT trying to claim that the processing deficits exhibited by children diagnosed with “(C)APD” were not REAL. Rather, I was trying to point out that these processing deficits are of neurolinguistic origin and as such need to be addressed from a linguistic rather than an ‘auditory’ standpoint.
In other words, if one carefully analyzes the child’s so-called processing issues, one will quickly realize that those issues are not related to the processing of auditory input (auditory domain) since the child is not processing tones, hoots, or clicks, etc. but rather has difficulty processing speech and language (linguistic domain). Continue reading If It’s NOT CAPD Then Where do SLPs Go From There?
What’s Memes Got To Do With It?
Today, after a long hiatus, I am continuing my series of blog posts on “Scholars Who do Not Receive Enough Mainstream Exposure” by summarizing select key points from Dr. Alan G. Kamhi’s 2004 article: “A Meme’s Eye View of Speech-Language Pathology“.
Some of you may be wondering: “Why is she reviewing an article that is more than a decade old?” The answer is simple. It is just as relevant today, if not more so, as it was 12 years ago when it first came out.
In this article, Dr. Kamhi asks a provocative question: “Why do some terms, labels, ideas, and constructs [in the field of speech pathology] prevail whereas others fail to gain acceptance?“
He attempts to answer this question by explaining the vital role the concept of memes plays in the evolution and spread of ideas.
A meme (shortened from the Greek mimeme, ‘to imitate’) is “an idea, behavior, or style that spreads from person to person within a culture.” The term was originally coined by British evolutionary biologist Richard Dawkins in The Selfish Gene (1976) to explain the spread of ideas and cultural phenomena such as tunes, catchphrases, customs, etc.
‘Selfish’ in this case means that memes “care only about their own self-replication.“ Consequently, “successful memes are those that get copied accurately (fidelity), have many copies (fecundity), and last a long time (longevity).” Therefore, “memes that are easy to understand, remember, and communicate to others” have the highest likelihood of survival and replication (pp. 105-106).
So what were some of the more successful memes which Dr. Kamhi identified in his article, which still persist more than a decade later?
- Learning Disability
- Auditory Processing Disorder
- Sensory Integration Disorder
- Dyslexia
- Articulation disorder
- Speech Therapist/ Pathologist
Interestingly, the losers of the “contest” were memes that contained the word ‘language’ in them:
- Language disorder
- Language learning disability
- Speech-language pathologist (albeit this term has gained far more acceptance in the past decade)
Dr. Kamhi further asserts that ‘language-based disorders have failed to become a recognizable learning problem in the community at large‘ (p.106).
So why are labels with the word ‘language’ NOT successful memes?
According to Dr. Kamhi that is because “language-based disorders must be difficult to understand, remember, and communicate to others“. Professional (SLP) explanations of what constitutes language are lengthy and complex (e.g., ASHA’s comprehensive definition) and as a result are not frequently applied in clinical practice, even when its aspects are familiar to SLPs.
Some scholars have suggested that the common practice of evaluating language with standardized language tools restricts a full understanding of the interactions of all of its domains (“within larger sociocultural context“) because such tools only examine isolated aspects of language (Apel, 1999).
Dr. Kamhi, in turn explains this within the construct of the memetic theory: namely “simple constructs are more likely to replicate than complex ones.” In other words: “even professionals who understand language may have difficulty communicating its meaning to others and applying this meaning to clinical practice” (p. 107).
Let’s talk about the parents who are interested in learning the root cause of their child’s difficulty learning and using language. Based on a specific child’s genetic and developmental background as well as presenting difficulties, an educated clinician can explain to the parents the multifactorial nature of their child’s deficits.
However, these informed explanations are frequently complex and certainly in no way simplistic. As a result, many parents will still attempt to seek out other professionals who can readily provide them with a “straightforward explanation” of their child’s difficulty. Since parents are “ultimately interested in finding the most effective and efficient treatment for their children,” it makes sense to believe/hope that “the professional who knows the cause of the problem will also know the most effective way to treat it“ (p. 107).
This brings us back to the concept of successful memes such as Auditory Processing Disorder (C/APD) as well as Sensory Processing Disorder (SPD) as isolated diagnoses.
Here are just some of the reasons behind their success:
- They provide a simple solution (which is not necessarily a correct one) that “the learning problem is the result of difficulty processing auditory information or difficulty integrating sensory information“.
- The assumption is that “improving auditory processing and sensory integration abilities” will improve the learning difficulties
- Both “APD and SID each have only one cause“, so “finding an appropriate treatment …seems more feasible because there is only one problem to eliminate“
- Gives parents “a sense of relief” that they finally have an “understandable explanation for what is wrong with their child“
- Gives parents hope that the “diagnosis will lead to successful remediation of the learning problem“
For more information on why APD and SPD are not valid stand-alone diagnoses please see HERE and HERE respectively.
A note on the lack of success of “phonological” memes:
- They are difficult to understand and explain (especially due to a lack of consensus of what constitutes a phonological disorder)
- Lack of familiarity with the term ‘phonological’ results in poor comprehension of the “phonological bases of reading problems,“ since it’s “much easier to associate reading with visual processing abilities, good instruction, and a literacy rich environment” (p. 108).
Let’s talk about MEMEPLEXES (Blackmore, 1999) or what occurs when “nonprofessionals think they know how children learn language and the factors that affect language learning“ (Kamhi, 2004, p.108).
A memeplex is a group of memes that becomes much more memorable to individuals (and can replicate more efficiently) as a team vs. in isolation.
Why is the APD Memeplex So Appealing?
According to Dr. Kamhi, if one believes that ‘(a) sounds are the building blocks of speech and language and (b) children learn to talk by stringing together sounds and constructing meanings out of strings of sounds’ (both wrong assumptions), then it’s quite a simple leap to the following fallacies:
- Auditory processing abilities are not influenced by language knowledge
- You can reliably discriminate between APD and language deficits
- You can validly and reliably assess “uncontaminated” auditory processing abilities and thus diagnose stand-alone APD
- You can target auditory abilities in isolation without targeting language
- Improvements in discrimination and identification of ‘speech sounds will lead to improvements in speech and language abilities‘
For more detailed information on why the above is incorrect, click HERE
On the success of the Dyslexia Meme:
- Most nonprofessionals view dyslexia as a visually based “reading problem characterized by letter reversals and word transpositions that affects bright children and adults“
- It’s highly appealing due to the simple nature of its diagnosis (high intelligence and poor reading skills)
- ‘The diagnosis of dyslexia has historically been made by physicians and psychologists rather than educators‘, which makes memetic replication highly successful
- The ‘dyslexic’ label is far more appealing and desirable than calling oneself ‘reading disabled’
For more detailed information on why the above is far too simplistic an explanation, click HERE and HERE.
Final Thoughts:
As humans, we engage in the transmission of ideas (good and bad) on a constant basis. The popularity of powerful social media tools such as Facebook and Twitter ensures their instantaneous and far-reaching delivery and impact. However, “our processing limitations, cultural biases, personal preferences, and human nature make us more susceptible to certain ideas than to others” (p. 110).
As professionals, it is important that we use evidence-based practices and the latest research to evaluate all claims pertaining to the assessment and treatment of language-based disorders. However, as Dr. Kamhi points out (p. 110):
- “Competing theories may be supported by different bodies of evidence, and the same evidence may be used to support competing theories.”
- “Reaching a scientific consensus also takes time.”
While these delays may play a negligible role when it comes to scientific research, they pose a significant problem for parents, teachers, and health professionals who are seeking to effectively assist these youngsters on a daily basis. Furthermore, even when select memes such as APD are beneficial because they allow for the delivery of services to a student who may otherwise be ineligible to receive them, erroneous intervention recommendations (e.g., working on isolated auditory discrimination skills) may further delay the delivery of appropriate and targeted intervention services.
So what are SLPs to do in the presence of persistent erroneous memes?
“Spread our language-based memes to all who will listen” (Kamhi, 2004, 110), of course! After all, we are the professionals whose job it is to treat any difficulties involving words. Consequently, our scope of practice certainly includes the assessment, diagnosis, and treatment of children and adults with speaking, listening, reading, writing, and spelling difficulties.
As for myself, I intend to start that task right now by hitting the ‘publish’ button on this post!
References:
Kamhi, A. (2004). A meme’s eye view of speech-language pathology. Language, Speech, and Hearing Services in Schools, 35, 105-112.
Language Processing Deficits (LPD) Checklist for School Aged Children
Need a Language Processing Deficits Checklist for School Aged Children?
You can find it in my online store HERE
This checklist was created to assist speech-language pathologists (SLPs) with figuring out whether the student presents with language processing deficits which require further follow-up (e.g., screening, comprehensive assessment). The SLP should provide this form to both teacher and caregiver/s to fill out to ensure that the deficit areas are consistent across all settings and people.
Checklist Categories:
- Listening Skills and Short Term Memory
- Verbal Expression
- Emergent Reading/Phonological Awareness
- General Organizational Abilities
- Social-Emotional Functioning
- Behavior
- Supplemental* Caregiver/Teacher Data Collection Form
- Select assessments sensitive to Auditory Processing Deficits