
Analyzing Discourse Abilities of Adolescents via Peer Conflict Resolution (PCR) Tasks

A substantial portion of my caseload comprises adolescent learners. Since standardized assessments possess significant limitations for this age group (as well as in general), I am frequently on the lookout for qualitative clinical measures that can accurately capture their abilities in the areas of discourse, critical thinking, and social communication.

One type of assessment that I find particularly valuable for this age group is a set of two Peer Conflict Resolution (PCR) tasks. First described in a 2007 article by Dr. Marilyn Nippold and her colleagues, they assess the expository discourse of adolescent learners.


Help, My Student has a Huge Score Discrepancy Between Tests and I Don’t Know Why?

Here’s a familiar scenario for many SLPs. You’ve administered several standardized language tests to your student (e.g., the CELF-5 and the TILLS). You expected to see roughly similar scores across tests. Much to your surprise, you find that while your student attained broadly average scores on one assessment, s/he completely bombed the second one, and you have no idea why that happened.

So you go on social media and start crowdsourcing information from SLPs in a variety of states and countries in order to figure out what happened and what you should do about it. Of course, the problem in such situations is that while some responses will be spot on, many will be utterly inappropriate. Luckily, the answer lies much closer than you think: in the technical manuals of the administered tests.

So what is responsible for such a drastic discrepancy? A few things, actually. For starters, unless both tests were co-normed (used the same sample of test takers), be prepared to see disparate scores due to the ability levels of the children in each test's normative group. Another important factor involved in the score discrepancy is how accurately the test differentiates disordered children from typically functioning ones.

Let’s compare two actual language tests to learn more. For the purpose of this exercise, let us select the Clinical Evaluation of Language Fundamentals-5 (CELF-5) and the Test of Integrated Language and Literacy Skills (TILLS). The former is a very familiar entity to numerous SLPs, while the latter is just coming into its own, having been released on the market only several years ago.

Both tests share a number of similarities. Both were created to assess the language abilities of children and adolescents with suspected language disorders. Both assess aspects of language and literacy (albeit not to the same degree nor with the same level of thoroughness).  Both can be used for language disorder classification purposes, or can they?

Actually, my last statement is rather debatable. A careful perusal of the CELF-5 reveals that its normative sample of 3,000 children included a whopping 23% of children with language-related disabilities. In fact, the folks from the Leaders Project did such an excellent and thorough job reviewing its psychometric properties that, rather than repeating that information here, I will simply refer readers to the Leaders Project website (click here) to review the limitations of the CELF-5. Furthermore, even the CELF-5 developers themselves have stated that: "Based on CELF-5 sensitivity and specificity values, the optimal cut score to achieve the best balance is -1.33 (standard score of 80). Using a standard score of 80 as a cut score yields sensitivity and specificity values of .97."
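For readers wondering where that standard score of 80 comes from, the conversion from the -1.33 cut score is simple arithmetic (assuming the familiar mean-100, SD-15 scale used for index scores):

$$100 + (-1.33)(15) = 100 - 19.95 \approx 80$$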

In other words, obtaining a standard score of 80 or below on the CELF-5 indicates that a child presents with a language disorder. Of course, as many SLPs already know, the eligibility criteria in many schools require language scores far below that in order for a student to qualify for language therapy services.

In fact, the test’s authors are fully aware of this and acknowledge it in the same document: "Keep in mind that students who have language deficits may not obtain scores that qualify him or her for placement based on the program’s criteria for eligibility. You’ll need to plan how to address the student’s needs within the framework established by your program."

But here is another issue: the CELF-5 sensitivity group included only a very small number of children ("67 children ranging from 5;0 to 15;11"), whose only inclusion requirement was to score 1.5 SDs below the mean "on any standardized language test." As the Leaders Project reviewers point out: "This means that the 67 children in the sensitivity group could all have had severe disabilities. They might have multiple disabilities in addition to severe language disorders including severe intellectual disabilities or Autism Spectrum Disorder making it easy for a language disorder test to identify this group as having language disorders with extremely high accuracy." (pp. 7-8)

Of course, this raises the question: why would anyone continue to administer such a test to students if its administration (a) does not guarantee disorder identification and (b) will not make the student eligible for language therapy despite demonstrated need?

The problem is that even though SLPs are mandated to use a variety of quantitative clinical observations and procedures in order to reliably qualify students for services, standardized tests still carry more weight than they should. Consequently, it is important for SLPs to select the right test to make their job easier.

The TILLS is a far less well-known assessment than the CELF-5, yet in the few years it has been on the market it has made its presence felt as a solid assessment tool with valid and reliable psychometric properties. Again, the venerable Dr. Carol Westby has already done such an excellent job reviewing its psychometric properties that I will refer readers to her review here rather than repeating that information. The upshot of her review is as follows: "The TILLS does not include children and adolescents with language/literacy impairments (LLIs) in the norming sample. Since the 1990s, nearly all language assessments have included children with LLIs in the norming sample. Doing so lowers overall scores, making it more difficult to use the assessment to identify students with LLIs." (p. 11)

Now, here many proponents of including children with language disorders in the normative sample will make a variation of the following claim: "You CANNOT diagnose a language impairment if children with language impairment were not included in the normative sample of that assessment!" Here's a major problem with such an assertion. When a child is referred for a language assessment, we really have no way of knowing whether this child has a language impairment until we actually finish testing them. We are in fact attempting to confirm or refute that possibility, hopefully via the use of reliable and valid testing. However, if the normative sample includes many children with language and learning difficulties, this significantly affects the accuracy of our identification, since we are interested in comparing the child's results to those of typically developing children, not disordered ones, in order to learn whether the child has a disorder in the first place. As per Peña, Spaulding, and Plante (2006), "the inclusion of children with disabilities may be at odds with the goal of classification, typically the primary function of the speech pathologist's assessment. In fact, by including such children in the normative sample, we may be 'shooting ourselves in the foot' in terms of testing for the purpose of identifying disorders" (p. 248).

Then there's a variation of this assertion, which I have seen in several Facebook groups: "Children with language disorders score at the low end of the normal distribution." Once again, such an assertion is incorrect, since Spaulding, Plante & Farinella (2006) have actually shown that, on average, these children will score at least 1.28 SDs below the mean, which is by no means the low-average range of the normal distribution. As per the authors: "Specific data supporting the application of 'low score' criteria for the identification of language impairment is not supported by the majority of current commercially available tests. However, alternate sources of data (sensitivity and specificity rates) that support accurate identification are available for a subset of the available tests" (p. 61).

Now, let us get back to your student in question, who performed so differently on the two administered tests. Given his clinically observed difficulties, you fully expected your testing to confirm them. But you are now more confused than before. Don't be! Search each test's technical manual for its sensitivity and specificity values. Vance and Plante (1994) put forth the following criteria for accurate identification of a disorder (discriminant accuracy): "90% should be considered good discriminant accuracy; 80% to 89% should be considered fair. Below 80%, misidentifications occur at unacceptably high rates," leading to "serious social consequences" for misidentified children (p. 21).
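As a quick refresher on what those percentages actually represent (the counts below are hypothetical and purely illustrative), where TP = true positives, FN = false negatives, TN = true negatives, and FP = false positives:

$$\text{sensitivity} = \frac{TP}{TP + FN}, \qquad \text{specificity} = \frac{TN}{TN + FP}$$

So, for example, if a test correctly flags 45 of 50 children who truly have a language disorder, its sensitivity is 45/50 = 90% (good by the criteria above); if it correctly clears only 40 of 50 typically developing children, its specificity is 40/50 = 80% (fair at best).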

Review the sensitivity and specificity of your test(s), take a look at the normative samples, and see if anything unusual jumps out at you that leads you to believe that the administered test may have trouble assessing what it purports to assess. Then, after supplementing your standardized testing results with good-quality clinical data (e.g., narrative samples, dynamic assessment tasks, etc.), consider creating a solidly referenced purchasing pitch to your administration to invest in more valid and reliable standardized tests.

Hope you find this information helpful in your quest to better serve the clients on your caseload. If you are interested in learning more about evidence-based assessment practices as well as the psychometric properties of various standardized speech-language tests, visit the SLPs for Evidence-Based Practice group on Facebook.

References:


It’s All Due to …Language: How Subtle Symptoms Can Cause Serious Academic Deficits

Scenario: Len is a 7-year-2-month-old (7;2), 2nd-grade student who struggles with reading and writing in the classroom. He is very bright and has a high average IQ, yet when he is speaking he frequently can’t get his point across to others due to excessive linguistic reformulations and word-finding difficulties. The problem is that Len passed all the typical educational and language testing with flying colors, receiving average scores across the board on various tests, including the Woodcock-Johnson Fourth Edition (WJ-IV) and the Clinical Evaluation of Language Fundamentals-5 (CELF-5). Stranger still is the fact that he aced the Comprehensive Test of Phonological Processing, Second Edition (CTOPP-2), so he is not even eligible for a “dyslexia” diagnosis. Len is clearly struggling in the classroom with expressing himself coherently, telling stories, understanding what he is reading, and putting his thoughts on paper. His parents have compiled impressively huge folders containing examples of his struggles. Yet because of his performance on the basic standardized assessment batteries, Len does not qualify for any functional assistance in the school setting, despite being virtually functionally illiterate in second grade.

The truth is that Len is quite a familiar figure to many SLPs, who at one time or another have encountered such a student and asked for guidance regarding the appropriate accommodations and services for him on various SLP-geared social media forums. But what makes Len such an enigma, one may inquire? Surely if the child had tangible deficits, wouldn’t standardized testing at least partially reveal them?

Well, it really all depends on what type of testing was administered to Len in the first place. A few years ago I wrote a post entitled "What Research Shows About the Functional Relevance of Standardized Language Tests." What researchers found is that there is a "lack of a correlation between frequency of test use and test accuracy, measured both in terms of sensitivity/specificity and mean difference scores" (Betz et al., 2012, p. 141). Furthermore, they also found that the most frequently used tests were comprehensive assessments, including the Clinical Evaluation of Language Fundamentals and the Preschool Language Scale, as well as one-word vocabulary tests such as the Peabody Picture Vocabulary Test. The most damaging finding was the fact that "frequently SLPs did not follow up the comprehensive standardized testing with domain-specific assessments (critical thinking, social communication, etc.) but instead used the vocabulary testing as a second measure" (Betz et al., 2012, p. 140).

In other words, many SLPs use only the tests at hand rather than the RIGHT tests aimed at identifying the student's specific deficits. But the problem doesn't actually stop there. Due to variations in the psychometric properties of various tests, many children with language impairment are missed by standardized tests, receiving scores within the average range or scores not low enough to qualify for services.

Thus, "the clinical consequence is that a child who truly has a language impairment has a roughly equal chance of being correctly or incorrectly identified, depending on the test that he or she is given." Furthermore, "even if a child is diagnosed accurately as language impaired at one point in time, future diagnoses may lead to the false perception that the child has recovered, depending on the test(s) that he or she has been given" (Spaulding, Plante & Farinella, 2006, p. 69).

There is, of course, yet another factor affecting our hypothetical client, and that is his relatively young age. This is especially evident with many educational and language tests for children in the 5-7 age group. Because the bar is set so low, concept-wise, for these age groups, many children with moderate language and literacy deficits can pass these tests with flying colors, only to be flagged by them two years later and identified with deficits far too late in the game. Coupled with the fact that many SLPs do not utilize non-standardized measures to supplement their assessments, Len is in a pretty serious predicament.

But what if there were a do-over? What could we do differently for Len to rectify this situation? For starters, we need to pay careful attention to his deficit profile in order to choose appropriate tests to evaluate his areas of need. This can be accomplished in a number of ways. The SLP can interview Len's teacher and his caregiver(s) in order to obtain a summary of his most pressing deficits. Depending on the extent of the reported deficits, the SLP can also provide them with a referral checklist to mark off the most significant areas of need.

In Len's case, we already have a pretty good idea regarding what's going on. We know that he passed basic language and educational testing, so, in the words of Dr. Geraldine Wallach, we need to keep "peeling the onion" via the administration of more sensitive tests that tap into Len's reported areas of deficit, which include word retrieval, narrative production, and reading and writing.

For that purpose, Len is a good candidate for the administration of the Test of Integrated Language and Literacy Skills (TILLS), which was developed to identify language and literacy disorders, has good psychometric properties, and contains subtests for the assessment of relevant skills in school-age children, such as reading fluency, reading comprehension, phonological awareness, spelling, and writing.

Given Len's reported history of narrative production deficits, he is also a good candidate for the administration of the Social Language Development Test Elementary (SLDTE). Here's why. Research indicates that narrative weaknesses significantly correlate with social communication deficits (Norbury, Gemmell & Paul, 2014). As such, it's not just children with Autism Spectrum Disorders who present with impaired narrative abilities. Many children with developmental language disorder (DLD) (#devlangdis) can present with significant narrative deficits affecting their social and academic functioning, which means that their social communication abilities need to be tested to confirm or rule out the presence of these difficulties.

However, standardized tests are not enough, since even the best standardized tests have significant limitations. As such, several non-standardized assessments in the areas of narrative production, reading, and writing may be recommended for Len to meaningfully supplement his testing.

Let's begin with an informal narrative assessment, which provides detailed information regarding the microstructural and macrostructural aspects of storytelling as well as the child's thought processes and socio-emotional functioning. My nonstandardized narrative assessments are based on the book elicitation recommendations from the SALT website. For 2nd graders, I use the book by Helen Lester entitled Pookins Gets Her Way. I first read the story to the child, then cover up the words and ask the child to retell the story based on the pictures. I read the story first because "the model narrative presents the events, plot structure, and words that the narrator is to retell, which allows more reliable scoring than a generated story that can go in many directions" (Allen et al., 2012, p. 207).

As the child is retelling the story, I digitally record him using the Voice Memos application on my iPhone for later transcription and thorough analysis. During storytelling, I only use the prompts "What else can you tell me?" and "Can you tell me more?" to elicit additional information. I try not to prompt the child excessively, since I am interested in cataloging all of his narrative-based deficits. After I transcribe the sample, I analyze it and make sure that I include the transcription and a detailed write-up in the body of my report, so parents and professionals can see and understand the nature of the child's errors/weaknesses.

Now we are ready to move on to a brief nonstandardized reading assessment. For this purpose, I often use the books from the Continental Press series entitled Reading for Comprehension, which contains books for grades 1-8. After I confirm with either the parent or the child's teacher that the selected passage reflects the complexity of work presented in the classroom for his grade level, I ask the child to read the text. As the child is reading, I calculate the number of words he reads correctly per minute and note the types of errors he exhibits during reading. Then I ask the child to state the main idea of the text, summarize its key points, define select text-embedded vocabulary words, and answer a few verbally presented reading comprehension questions. After that, I provide the child with the accompanying 5-question multiple-choice worksheet and ask him to complete it. I analyze my results in order to determine whether I have accurately captured the child's reading profile.
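For the fluency piece, here is the standard words-correct-per-minute calculation, worked with hypothetical numbers rather than data from an actual student:

$$\text{WCPM} = \frac{\text{total words read} - \text{errors}}{\text{reading time in minutes}} = \frac{104 - 8}{1.5} = 64$$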

Finally, if any additional information is needed, I administer a nonstandardized writing assessment, which I base on the Common Core State Standards for 2nd grade. For this task, I provide a student with a writing prompt common for second grade and give him a period of 15-20 minutes to generate a writing sample. I then analyze the writing sample with respect to contextual conventions (punctuation, capitalization, grammar, and syntax) as well as story composition (overall coherence and cohesion of the written sample).

The above relatively short assessment battery (2 standardized tests and 3 informal assessment tasks), which takes approximately 2-2.5 hours to administer, allows me to create a comprehensive profile of the child's language and literacy strengths and needs. It also allows me to generate targeted goals in order to begin effective and meaningful remediation of the child's deficits.

Children like Len will, unfortunately, remain unidentified unless they are administered more sensitive tasks to better understand their subtle pattern of deficits. Consequently, to ensure that they do not fall through the cracks of our educational system due to misguided overreliance on a limited number of standardized assessments, it is very important that professionals select the right assessments, rather than the assessments at hand, in order to accurately determine the child's areas of need.

References:


Improving Executive Function Skills of Language Impaired Students with Hedbanz

Those of you who have previously read my blog know that I rarely use children’s games to address language goals.  However, over the summer I have been working on improving executive function abilities (EFs) of some of the language impaired students on my caseload. As such, I found select children’s games to be highly beneficial for improving language-based executive function abilities.

For those of you who are only vaguely familiar with this concept, executive functions are higher-level cognitive processes involved in the inhibition of thought, action, and emotion, which are mediated primarily by the prefrontal cortex of the frontal lobe of the brain. The development of executive functions begins in early infancy, but it can be easily disrupted by a number of adverse environmental and organic experiences (e.g., psychosocial deprivation, trauma). Furthermore, research in this area indicates that children with language impairments present with executive function weaknesses which require remediation.

EF components include working memory, inhibitory control, planning, and set-shifting.

  • Working memory
    • Ability to store and manipulate information in mind over brief periods of time
  • Inhibitory control
    • Suppressing responses that are not relevant to the task
  • Set-shifting
    • Ability to shift behavior in response to changes in tasks or environment

Simply put, EFs contribute to the child’s ability to sustain attention, ignore distractions, and succeed in academic settings. By now some of you must be wondering: “So what does Hedbanz have to do with any of it?”

Well, Hedbanz is a quick-paced multiplayer  (2-6 people) game of “What Am I?” for children ages 7 and up.  Players get 3 chips and wear a “picture card” in their headband. They need to ask questions in rapid succession to figure out what they are. “Am I fruit?” “Am I a dessert?” “Am I sports equipment?” When they figure it out, they get rid of a chip. The first player to get rid of all three chips wins.

The game sounds deceptively simple. Yet any SLPs or parents who have ever played this game with their language impaired students/children would be quick to note how extraordinarily difficult it is for the children to figure out what their card is. Interestingly, in my clinical experience, I've noticed that it's not just moderately language impaired children who present with difficulty playing this game. Even my bright, average-intelligence teens, who have passed vocabulary and semantic flexibility testing (such as the WORD Test 2-Adolescent or the Vocabulary Awareness subtest of the Test of Integrated Language and Literacy Skills), significantly struggle with language organization when playing this game.

So what makes Hedbanz so challenging for language impaired students? Primarily, it's the involvement and coordination of multiple executive functions during the game. In order to play Hedbanz effectively and effortlessly, the following EF involvement is needed:

  • Task Initiation
    • Students with executive function impairments will often "freeze up" and, as a result, may have difficulty initiating questions in the game, because many will not know what kinds of questions to ask, even after extensive explanations and elaborations by the therapist.
  • Organization
    • Students with executive function impairments will present with difficulty organizing their questions into meaningful categories and, as a result, will frequently lose their train of thought during the game.
  • Working Memory
    • This executive function requires the student to keep key information in mind as well as to keep track of which questions they have already asked.
  • Flexible Thinking
    • This executive function requires the student to consider a situation from multiple angles in order to figure out the quickest and most effective way of arriving at a solution. During the game, students may present with difficulty flexibly generating enough organizational categories in order to be effective participants.
  • Impulse Control
    • Many students with difficulties in this area may blurt out an inappropriate category or an inappropriate question without thinking it through first.
      • They may also present with difficulty set-shifting. To illustrate, one of my 13-year-old students with ASD kept repeating the same question when it was his turn, despite the fact that he had previously been given the answer by me as well as by the other players.
  • Emotional Control
    • This executive function helps students keep their emotions in check when the game becomes too frustrating. Many students with difficulties in this area will begin reacting behaviorally when things don't go their way and they are unable to figure out what their card is quickly enough. As a result, they may have difficulty mentally regrouping and reorganizing their questions when something goes wrong in the game.
  • Self-Monitoring
    • This executive function allows the students to figure out how well or how poorly they are doing in the game. Students with poor insight into their own abilities may present with difficulty understanding that they are doing poorly and may require explicit instruction in order to change their question types.
  • Planning and Prioritizing
    • Students with poor abilities in this area will present with difficulty prioritizing their questions during the game.

Consequently, all of the above executive functions can be addressed via language-based goals.  However, before I cover that, I’d like to review some of my session procedures first.

Typically, long before game initiation, I use the cards from the game to prep the students by teaching them how to categorize and classify presented information so they can play the game effectively and efficiently.

Rather than using the “tip cards”, I explain to the students how to categorize information effectively.

This, in turn, becomes a great opportunity for teaching students relevant vocabulary words, which can be extended far beyond playing the game.

I begin the session by explaining to the students that pretty much everything can be roughly divided into two categories: animate (living) and inanimate (nonliving) things. I explain that humans, animals, and plants belong to the category of living things, while everything else belongs to the category of inanimate objects. I further divide the category of inanimate things into naturally existing and man-made items. I explain to the students that the naturally existing category includes bodies of water, landmarks, as well as things in space (moon, stars, sky, sun, etc.). In contrast, things constructed in factories or made by people would be examples of man-made objects (e.g., buildings, aircraft, etc.)

When I'm confident that the students understand my general explanations, we move on to discuss further refinement of these broad categories. If a student determines that their card belongs to the category of living things, we discuss how from there the student can further determine whether they are an animal, a plant, or a human. If a student determines that their card belongs to the animal category, we discuss how we can narrow down the options for figuring out which animal is depicted on their card by asking questions regarding habitat ("Am I a jungle animal?") and classification ("Am I a reptile?"). From there, discussion of attributes prominently comes into play. We discuss shapes, sizes, colors, accessories, etc., until the student is able to confidently figure out which animal is depicted on their card.

In contrast, if the student’s card belongs to the inanimate category of man-made objects, we further subcategorize the information by the object’s location (“Am I found outside or inside?”; “Am I found in ___ room of the house?”, etc.), utility (“Can I be used for ___?”), as well as attributes (e.g., size, shape, color, etc.)

Thus, in addition to improving the students' semantic flexibility skills (production of definitions, synonyms, attributes, etc.), the game teaches the students to organize and compartmentalize information in order to arrive at a conclusion in the most efficient and time-expedient fashion.

Now we are ready to discuss what types of language-based EF goals SLPs can target by simply playing this game.

1. Initiation: Student will initiate questioning during an activity in __ number of instances per 30-minute session given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

2. Planning: Given a specific routine, student will verbally state the order of steps needed to complete it with __% accuracy given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

3. Working Memory: Student will repeat clinician provided verbal instructions pertaining to the presented activity, prior to its initiation, with 80% accuracy  given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

4. Flexible Thinking: Following a training by the clinician, student will generate at least __ questions needed for task completion (e.g., winning the game) with __% accuracy given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

5. Organization: Student will use predetermined written/visual cues during an activity to assist self with organization of information (e.g., questions to ask) with __% accuracy given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

6. Impulse Control: During the presented activity the student will curb blurting out inappropriate responses (by silently counting to 3 prior to providing his response) in __ number of instances per 30 minute session given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

7. Emotional Control: When upset, student will verbalize his/her frustration (vs. acting out behaviorally) in __ number of instances per 30-minute session given (maximal, moderate, minimal) type of ___ (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

8. Self-Monitoring:  Following the completion of an activity (e.g., game) student will provide insight into own strengths and weaknesses during the activity (recap) by verbally naming the instances in which s/he did well, and instances in which s/he struggled with __% accuracy given (maximal, moderate, minimal) type of  ___  (phonemic, semantic, etc.) prompts and __ (visual, gestural, tactile, etc.) cues by the clinician.

There you have it. This one simple game doesn't just target a plethora of typical expressive language goals. It can effectively target and improve language-based executive function goals as well. Considering the fact that it sells for approximately $12 on Amazon.com, that's a pretty useful therapy material to have in one's clinical tool repertoire. For fancier versions, clinicians can use the "Jeepers Peepers" photo card sets sold by Super Duper Inc. Strapped for cash due to a highly limited budget? You can find plenty of free materials online if you simply input "Hedbanz cards" into your Google search query. So have a little fun in therapy while your students learn something valuable in the process, and play Hedbanz today!

Related Smart Speech Therapy Resources:


APD Update: New Developments on an Old Controversy

In the past two years, I wrote a series of research-based posts (HERE and HERE) regarding the validity of (Central) Auditory Processing Disorder (C/APD) as a standalone diagnosis, as well as questioning its utility for classification purposes in the school setting.

Once again, I want to reiterate that I was in no way disputing the legitimate symptoms (e.g., difficulty processing language, difficulty organizing narratives, difficulty decoding text, etc.) with which the students diagnosed with "CAPD" were presenting.

Rather, I was citing research to indicate that these symptoms were indicative of broader linguistic-based deficits, which required targeted linguistic/literacy-based interventions rather than recommendations for specific prescriptive programs (e.g., CAPDOTS, Fast ForWord, etc.),  or mere accommodations.

I was also significantly concerned that overfocus on the diagnosis of (C)APD tended to obscure REAL, language-based deficits in children and forced SLPs to address erroneous therapeutic targets based on AuD recommendations, or restricted students to receiving mere accommodations rather than rightful therapeutic remediation.


Teaching Punctuation for Writing Success

Last week I wrote a blog post entitled "Teaching Metalinguistic Vocabulary for Reading Success" (HERE), in which I described the importance of explicitly teaching students basic metalinguistic vocabulary terms as elementary building blocks needed for reading success. This week I wanted to write a brief blog post regarding terminology related to one particular, often ignored aspect of writing: punctuation.

Punctuation brings written words to life. As we have seen from countless grammar memes, an error in punctuation can convey a completely different meaning.

In my experience administering the Test of Written Language-4 (TOWL-4), as well as analyzing informal writing samples, I frequently see an almost complete absence of punctuation marks in the presented writing samples. These are not the samples of 2nd, 3rd, or even 4th graders that I am referring to. Sadly, I'm referring to the written samples of students in middle school and even high school, which frequently lack basic punctuation and capitalization.

Explicit instruction in punctuation terminology significantly improves my students' understanding of sentence formation. Even my students with mild to moderate intellectual disabilities significantly benefit from understanding how to use periods, commas, and question marks in sentences.

I even created a basic handout to facilitate my students' comprehension of punctuation mark usage in sentences (FREE HERE).

Similar to my metalinguistic vocabulary handout, I ask my older elementary-aged students with average IQ to look up online and write down the rules of usage for each of the provided terms (e.g., colon, hyphen, etc.), under therapist supervision.

This in turn becomes a critical thinking and executive functions activity. Students need to sift through quite a bit of information to find a website that provides the clearest answers regarding the usage of specific punctuation marks. Here, it's important for students to locate kid-friendly websites that provide them with simple but accurate descriptions of punctuation mark usage. One example of such a website is Enchanted Learning, which also provides free worksheets for practicing punctuation usage.

In contrast to the above, I use structured worksheets and punctuation-related workbooks for younger elementary-age students (e.g., 1st-5th grades) as well as older students with intellectual impairments (click on each grade number above to see the workbooks).

I find that even after several sessions of explicitly teaching punctuation usage to my students, their written sentences significantly improve in clarity and cohesion.

One of the best parts about this seemingly simple activity is that, due to the sheer volume of the provided punctuation mark vocabulary (20 items in total), a creative clinician/parent can stretch it into multiple therapy sessions. This is because careful rule identification for each punctuation mark will in turn involve a number of related vocabulary definition tasks. Furthermore, correct usage of each punctuation mark in a sentence for internalization purposes (rather than mere memorization) will also take up a significant period of time.

How about you? Do you explicitly work on teaching punctuation?


Teaching Metalinguistic Vocabulary for Reading Success

In my therapy sessions I spend a significant amount of time improving the literacy skills (reading, spelling, and writing) of language impaired students. In my work with these students I emphasize goals with a focus on phonics, phonological awareness, encoding (spelling), etc. However, what I have frequently observed in my sessions are significant gaps in the students' foundational knowledge pertaining to the basics of sound production and letter recognition. Basic examples of these foundational deficiencies involve students not being able to fluently name the letters of the alphabet, understand the difference between vowels and consonants, or fluently engage in sound/letter correspondence tasks (e.g., name a letter and then quickly and accurately identify which sound it makes). Consequently, a significant portion of my sessions involves explicit instruction in the above concepts.

This got me thinking about my students' vocabulary knowledge in general. We SLPs spend a significant amount of time on explicit and systematic vocabulary instruction with our students because, compared to typically developing peers, they have immature and limited vocabulary knowledge. But do we teach our students the abstract vocabulary necessary for reading success? Do we explicitly teach them the definitions of a letter, a word, a sentence, etc.?

A number of my colleagues are skeptical. “Our students already have poor comprehension”, they tell me, “Why should we tax their memory with abstract words of little meaning to them?”  And I agree with them of course, but up to a point.

I agree that our students have working memory and processing speed deficits as a result of which they have a much harder time learning and recalling new words.

However, I believe that not teaching them meanings of select words pertaining to language is a huge disservice to them. Here is why. To be a successful communicator, speaker, reader, and writer, individuals need to possess adequate metalinguistic skills.

In simple terms, "metalinguistics" refers to the individual's ability to actively think about, talk about, and manipulate language. Reading, writing, and spelling require an active level of awareness and thought about language. Students with poor metalinguistic skills have difficulty learning to read, write, and spell. They lack awareness that spoken words are made up of individual units of sound, which can be manipulated. They lack awareness that letters form words, words form phrases and sentences, and sentences form paragraphs. They may not understand that letters make sounds or that a word may consist of more letters than sounds (e.g., "ship" has four letters but only three sounds). The bottom line is that students with decreased metalinguistic skills cannot effectively use language to talk about concepts like sounds, letters, or words unless they are explicitly taught those abilities.

So I do! Furthermore, I can tell you that explicit instruction of metalinguistic vocabulary does significantly improve my students' understanding of the tasks involved in attaining literacy competence. Even my students with mild to moderate intellectual disabilities significantly benefit from understanding the meanings of letters, words, sentences, etc.

I even created a basic abstract vocabulary handout to facilitate my students' comprehension of these words (FREE HERE). While by no means exhaustive, it is a decent starting point for teaching my students the vocabulary needed to improve their metalinguistic skills.

For older elementary-aged students with average IQ, I only provide the words I want them to define and then ask them to look up their meanings online via a PC or an iPad. This turns the vocabulary activity into a critical thinking and executive functions task.

Students need to figure out the appropriate search string needed to locate the answer, as well as which definition comes closest to clearly and effectively defining the presented word. One of the things I really like about the Google online dictionary is that it provides multiple definitions of the same word along with word origins. As a result, it teaches students to carefully review and reflect upon their selected definition in order to determine its appropriateness.

A word of caution, though, regarding Kiddle, the Google-powered search engine for children. While it's great for locating child-friendly images, it is not appropriate for locating abstract definitions of words. To illustrate, when you type the search string "what is the definition of a letter?" into Google, you will get several responses that appropriately match some meanings of your query. However, the same search string in Kiddle will merely yield helpful tips on writing a letter as well as images of envelopes with stamps affixed to them.

In contrast to the above, I use more structured vocabulary-defining activities for younger elementary-age students as well as students with intellectual impairments. I provide simple definitions of abstract words, attach images and examples to each definition, and create cloze activities with several answer choices in order to ensure my students' comprehension of these words.

I find that this and other metalinguistic activities significantly improve my students' comprehension of abstract words such as 'communication', 'language', and 'literacy'. They cease being mere buzzwords, frequently heard yet consistently not understood. To my students these words begin to come to life, brim with meaning, and inspire numerous 'aha' moments.

Now that you’ve had a glimpse of my therapy sessions I’d love to have a glimpse of yours. What metalinguistic goals related to literacy are you targeting with your students? Comment below to let me know.