EBLI and Speech to Print

Three of the questions I most often receive are: 

  • What do I think of the EBLI reading program?

  • What do I think of a speech to print approach to phonics?

  • Do I think a speech to print approach is better than a traditional phonics approach?
     

Truthfully, I think these are all very difficult questions to answer. However, I do think it is important to try to answer them, as they are so widely asked. To do that, let's start by discussing what EBLI is and what a speech to print approach to phonics is.

 

EBLI stands for Evidence-Based Literacy Instruction. It is a speech to print phonics program created by Nora Chahbazi and inspired by the work of Diane McGuinness. The program includes explicit instruction in vocabulary, fluency, comprehension, handwriting, writing, and spelling. EBLI is widely popular and has seen a particular surge in enthusiasm with the recent Science of Reading (SOR) movement.

 

Speech to Print, sometimes referred to as linguistic phonics, is an approach to phonics in which instructors teach the possible graphemes (the letter or letters representing a sound) that can spell a given phoneme (a single unit of sound). Comparatively, in most other phonics programs, an instructor presents a grapheme and teaches the sound(s) it can represent. Over the years, to better understand this issue, I have interviewed Dr. Steve Truch, Nora Chahbazi, and Nicole Ott.

 

As I see it, a speech to print approach is really trying to address the same key problem that Dr. Pete Bowers raises in his criticisms of a traditional phonics approach: the English language does not consistently represent phonemes the same way, so an overly simplified phonics code might create more confusion. For example, the phoneme /sh/ can be spelled in 20 different ways, as Nora Chahbazi pointed out to me. This means that proficient reading requires a great deal of cognitive flexibility. Speech to print approaches try to build this concept in right from the beginning.

 

In a traditional phonics lesson, an instructor might start with the letter <A>, ask the students what sound the letter <A> makes, and then have the students list <A> words. In a speech to print program like EBLI, “the instructor says ‘We are going to look at different spellings that represent the /ai/ sound.’ Then a sort is done where students discover a variety of ways to spell this sound, with teacher support and by segmenting the sounds as they match the spellings for the various words.” (Chahbazi, 2023).

 

As Nora pointed out to me, there is a lot of confusion around the speech to print terminology. To quote Nicole Ott: 

“Speech to print and print to speech have two definitions in common use. One is a process, and the other is an instructional method. Unfortunately, this twofold definition leads to some interesting discussions because people talking about speech to print can be talking about completely different things.

 

Print to speech and speech to print are both a process and an instructional method.

All decent reading programs use both processes: print to speech (blending) and speech to print (segmenting).

 

Process:

Print to speech:

The process of print to speech is blending sounds into a word.

 

Speech to print:

The process of speech to print is segmenting a word into sounds and putting those sounds on paper in the form of a written word.

 

Instructional method:

Print to Speech

Traditional reading programs using the Orton Gillingham method and other OG-like programs are known as a print to speech instructional method. These methods focus on print first. They concentrate on teaching letter names and letter sounds in isolation first before a student begins to read or spell. They teach spelling rules, exceptions, syllable types, and syllable rules to move from the written word to speech. Although OG and OG-like programs are called print to speech, they use both processes of speech to print (segmenting) and print to speech (blending).

Speech to Print

 

Programs like EBLI, Reading Simplified, Sounds-Write and a few others are referred to as speech to print instructional methods. These programs are also known as linguistic phonics or structured linguistic literacy. These programs focus on speech first. There are no rules and no syllable types. A child does not need to know letter names or sounds to begin building (segmenting and writing) his first word. Segment sounds first, write sounds next, and blend sounds back into a word is a common process in speech to print programs. Although programs like EBLI are called speech to print, they use both processes of speech to print (segmenting) and print to speech (blending).

 

The first part of Louisa Moats's book, Speech to Print, centers around the process of speech to print, but she describes the print to speech instructional method in the last part of her book. While EBLI instructors use the information in Moats's book about the process of speech to print, we would never use her instructional method, print to speech, with all its rules, syllable types, and high cognitive load for our students.

 

EBLI uses both print to speech and speech to print processes in its instruction, but it is a true speech to print instructional method.”


 

Due to the confusion around the speech to print terminology, some have begun to refer to this approach as Structured Linguistic Literacy. In order for a program to be classified as Structured Linguistic Literacy, it must meet Diane McGuinness’s 10 principles of instruction, which, according to John Walker, are:

 

“1. “No sight words (except high frequency words with rare spellings).” [These might be words like ‘of’, in which the spelling < f > represents the sound /v/. ‘Of’ is the only word in the English language in which /v/ is spelt using the spelling < f >.]

 

2. “No letter names.” [Letter names can be useful as a shortcut once pupils understand that the sounds in the English language are represented by spellings. This would usually be by the end of the first year of school, when they have learned the Initial/Basic Code.]

 

3. “Sound-to-print orientation. Phonemes, not letters, are the basis of the code.” [Start by teaching that sounds in simple, CVC words can be represented by single-letter spellings. Humans have been speaking for, at the very least, 100,000 years; writing, on the other hand, was only invented about 5,000 years ago. See also a previous post on this.]

 

4. “Teach phonemes only and no other sound units.” [The English alphabet system is based on the individual sounds of the language, not on larger units, such as onsets and rhymes. If a child has been taught that /b/ and /l/ are written < b > and < l >, why on earth would they be taught ‘bl’ as well?]

 

5. “Begin with an artificial transparent alphabet or basic code: a one-to-one correspondence between 40 phonemes and their most common spelling.” [This is one of the most crucial points made by McGuinness. Young children can easily be taught that the sounds in our language are represented by single-letter spellings. They are easy to introduce, easy to remember when spelling, and lack the complexity of digraphs, trigraphs and multigraphs.]

 

6. “Teach children to identify and sequence sounds in real words by segmenting and blending, using letters.” [By ‘using letters’, what McGuinness means is segmenting and blending in the context of written words, rather than doing this orally.]

 

7. “Teach children how to write each letter. Integrate writing into every lesson.” [Children should be taught carefully how to form each letter, though not in their phonics lessons (cognitive overload). When teachers neglect to teach letter formation from the start, like anything, correcting a fault can be very, very hard, especially when it has been practised to semi-permanency.]

 

8. “Link writing, spelling, and reading to ensure that children learn that the alphabet is a code, and that the code works in both directions: encoding/decoding.” [Reading and writing are two sides of the same coin. Of course, from a psychological point of view, writing is more difficult than reading because it draws on recall memory, rather than recognition memory.]

 

9. “Spelling should be accurate or, at a minimum, phonetically accurate (all things within reason).” [Once it dawns on (even quite young) children that they can represent the sounds in words in writing, they start to try and write anything and everything. Error correction should always be based on what the children have already learnt. For example, ‘frog’ spelt as ‘fog’ would need to be corrected but ‘Queen’ spelt as ‘Quen’ would not unless and until the spellings of the sound /ee/ had been taught.]

 

10. “Lessons should move on to include the advanced spelling code (the 136 remaining common spellings and 80 sight words).” [This probably takes the average child about two more years to learn. However, once learnt, it provides a strong base for learning less frequently encountered spellings, which, if taught in context, are easy to add to the repertoire. ]”


 

As I see it, there are five advantages to this type of approach:

  1. The approach is more linguistically accurate

  2. The approach is faster

  3. The approach builds cognitive flexibility into the instructional model

  4. The approach connects phonemic awareness and letter-sound knowledge within the same lesson

  5. The approach connects decoding with reading within the same lesson

 

That said, I do see one possible theoretical disadvantage. By using the speech to print model, you are inherently teaching a far more complex, albeit more accurate, version of the phonetic code, while traditional phonics typically teaches a much simpler version of the code. Of course, the idea behind traditional phonics might not be to teach the entire phonics code but rather to teach a sort of “cheat sheet” for decoding, so as to allow faster access to reading new words. I therefore wonder if the speech to print method is inherently more cognitively demanding. However, many scholars in the speech to print area seem to thoroughly object to this specific criticism.

 

Nora Chahbazi reached out to me to discuss this issue of the potential impact on cognitive load and said, “Cognitive load with EBLI and other Structured Linguistic Literacy/S2P/Linguistic Phonic programs actually LIGHTEN the cognitive load (a lot). The lack of phonics rules, syllable division rules, letter names, and more are what lighten the load.” She also referred me to an article by John Walker, creator of the Sounds-Write speech to print phonics program, which I will link in the references section.

 

When I interviewed Nicole Ott, she presented me with a speech to print lesson, which was, admittedly, a humbling experience, as my knowledge of linguistics was far lower than hers, and I truly struggled. That said, I think she intentionally gave me a difficult lesson to model what the process would feel like as a student. After this lesson was done, Nora Chahbazi told me, “With EBLI we always ensure struggle then provide guidance to correct errors or struggles. If the work is too easy or already known, we increase the difficulty of words used in instruction. Rather than being about knowledge, the lessons in SLL are about teaching and helping all learners apply the skills of PA and the processes to match speech sounds to letters/graphemes to, as quickly as possible, accurately and automatically read and spell-reading and spelling are taught at the same time as they are reversible.”

I must say, I like the speech to print approach. It addresses a difficult problem with the linguistics of the English language. The approach is also much more flexible and easier to adapt for older students than traditional phonics instruction. Of course, long time followers of this blog will know that I don’t place a great deal of value on qualitative arguments. Rather, I want to know what the experimental evidence shows.

 

Previously, when I have been asked about the speech to print approach, I have pointed people towards the studies on the program Targeted Reading Intervention (TRI), which is the experimental basis for the linguistic phonics program Reading Simplified. Upon request, Marnie Ginsberg, the creator of Reading Simplified, sent me 5 high quality TRI RCTs and 2 high quality quasi-experimental studies, all of which used standardized assessments and collectively showed a mean effect size of .45. These studies were all on grades K-1 and had an average sample size of 418. I conducted an additional search for studies on TRI, using the Education Source database, and found no additional studies. This effect size is ever so slightly higher than both the NRP mean for phonics and the mean from my meta-analysis of structured literacy programs. However, higher quality studies typically show lower effect sizes, and the TRI studies are some of the highest quality phonics studies I have read, which is why TRI is one of the programs I would be most willing to endorse.

I previously conducted a systematic search for studies on the SPELL-Links program, another popular speech to print program, and found no studies. Recently, Jan Wasowicz, one of the creators of the program, reached out to me to share a study conducted in 2017 by Wanzek et al. In this RCT, 81 struggling grade 1 students received 6 weeks of instruction and showed a mean effect size of .78 on the Woodcock spelling test. Using both the TRI/Reading Simplified and SPELL-Links research, I compiled all of the experimental speech to print studies together to conduct a meta-analysis and found a mean unweighted effect size of .44, 95% CI = [.22, .66].
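For readers who want to see the arithmetic, here is a minimal Python sketch of how an unweighted mean effect size and its 95% confidence interval are computed. The individual d values below are hypothetical placeholders (the per-study values are not listed in this article); only the method is the point.

# Minimal sketch: unweighted mean effect size with a t-based 95% confidence interval.
# The d values below are hypothetical placeholders, not the actual study results.
import statistics
from scipy import stats

effect_sizes = [0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.78]  # hypothetical Cohen's d values

n = len(effect_sizes)
mean_d = statistics.mean(effect_sizes)           # unweighted mean
se = statistics.stdev(effect_sizes) / n ** 0.5   # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)            # two-tailed 95% critical value
print(f"mean d = {mean_d:.2f}, 95% CI = [{mean_d - t_crit * se:.2f}, {mean_d + t_crit * se:.2f}]")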

This research suggests strong experimental evidence for a speech to print approach to phonics. Back to EBLI: I previously passed on reviewing the EBLI program because it did not have any experimental or quasi-experimental studies; it only had case studies. In a case study design, the authors typically compare the growth of a class from the beginning of a year to the end of a year. This is problematic, because we expect students to learn across a year regardless. In an experimental or quasi-experimental study, we have a control group, and learning is measured in comparison to that control. By doing this, we correct for some of the impact of time and measure the magnitude of effect, rather than the total growth in learning (a quick numerical sketch of this difference follows the list below). I typically don't examine case studies for this reason. However, ignoring case studies presents three serious research problems.

 

  1. In my personal experience, most language program studies are case studies. So by excluding them, I exclude the majority of the research. 

  2. Experimental and quasi-experimental studies can be very expensive. For example, even low-cost experimental studies that meet the WWC requirements can cost between $50,000 and $300,000, according to the Institute of Education Sciences (2013). This means that by excluding studies that don’t have an experimental design, I automatically bias my results against smaller companies.

  3. It means we cannot examine research questions that don’t have experimental studies, such as the efficacy of EBLI.
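To make the design difference concrete, here is a small sketch with made-up numbers showing why an uncontrolled pre/post comparison tends to report a larger effect than a controlled comparison of the same intervention:

# Hypothetical means on a standardized test; the standard deviation is assumed to be 12.
treat_pre, treat_post = 85.0, 95.0        # intervention group, start and end of year
control_pre, control_post = 85.0, 91.0    # the control group also grows over the year
sd = 12.0

d_case_study = (treat_post - treat_pre) / sd                       # pre/post gain only
d_experimental = ((treat_post - treat_pre) -
                  (control_post - control_pre)) / sd               # gain relative to control
print(round(d_case_study, 2), round(d_experimental, 2))            # 0.83 vs. 0.33

Because the control group also improves across the year, the controlled estimate is smaller; that gap is the inflation the rest of this section tries to account for.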

 

In the past, lower quality meta-analyses often combined the effect sizes of case studies and experimental studies. This approach allows for a greater range of study samples, and authors argued that studies on the higher end of the quality spectrum, which typically show lower results, would balance out the mean effect size. In other words, they theorized that a large enough number of studies would correct for most errors. This would make sense if most pedagogies studied had a similar composition of studies, as the measuring stick would then be consistent. Of course, this is not really the case, and there is a wide range of study quality across different topics.

 

More recent meta-analysis authors have increasingly moved to exclude the results of case studies and include only experimental studies. Indeed, this is the methodology I most typically take. However, another methodology would be to use tools like regression analysis and moderator analysis to separate out the effects of experimental designs and case study designs within a meta-analysis. This was the approach taken by both Fitton (2018) and D’Agostino (2017), the Fitton meta-analysis actually being one of my two favorite meta-studies. Of course, even if we separate these effects out, it leaves the question of how to interpret the effect sizes of case studies, when we know they are both less accurate and inflated. In 2014, Plonsky and Oswald conducted a study of 346 primary studies and 91 meta-analyses to try to answer this question. They wrote the following interpretation guidelines for case studies:

Comparatively, most researchers interpret effect sizes for experimental studies based on Cohen’s guidelines, which can be seen in this chart:

While I have previously avoided reviewing case studies as much as possible, I think the Plonsky review provides an excellent tool for examining them, and for the reasons outlined above I think it is sometimes necessary to look at case study research. That said, I want to make two caveats. First, case studies don’t just produce higher effect sizes, they produce more variable effect sizes, which is likely why Plonsky’s minimum non-negligible effect size is so high. Therefore, it does not make sense to include case studies when there is already sufficient experimental research.

 

Second, it does not make sense to report an unweighted mean effect size for a pedagogy based on both case studies and experimental studies at the same time, even if the authors also provide a moderator or regression analysis. I think it is necessary to separate the effects, as people often look at the mean effect size of a meta-analysis without considering the quality of the underlying studies. Including the two types of effect sizes together is therefore too misleading and dangerous to use. You could weight for the standard error or the sample size to correct for some of this problem, as is the standard practice for weighting effect sizes, and this would correct for some of the bias caused by the case studies. However, I would imagine even this would not be adequate. Truthfully, I think the best methodology would be to simply report the effect sizes completely separately.
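As a reference point for what that standard practice looks like, here is a small sketch of inverse-variance weighting; the (d, SE) pairs below are hypothetical.

# Sketch of inverse-variance weighting: each study counts in proportion to 1 / SE^2,
# so larger, more precise studies pull the mean harder. The values are hypothetical.
studies = [(0.45, 0.10), (0.78, 0.25), (1.63, 0.40)]   # (effect size d, standard error)
weights = [1 / se ** 2 for _, se in studies]
weighted_mean = sum(d * w for (d, _), w in zip(studies, weights)) / sum(weights)
print(f"inverse-variance weighted mean d = {weighted_mean:.2f}")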

 

One alternative option could be to weight for study quality. For example, the Plonsky guide suggests that case study effect sizes be interpreted as inflated roughly 1.55-fold, so we could divide the effect sizes of case studies by 1.55. Of course, this might be too simple a correction, as Plonsky’s guide is actually on a curve, with smaller deviations being more meaningful at higher overall effect sizes.

 

Okay, away from my favorite nerdy topic (research methodology) and back to EBLI. Realizing I needed to answer this question, I reached out again to Nora Chahbazi, the creator of EBLI, and asked her if she would be willing to send me her case studies. She sent me 8 case studies; however, only one of them had the data required to calculate an effect size. That study was an unpublished case study by Matt Burns and Mike Gallutia, written in 2005 and based on data from 2004. That said, despite being a case study by design, it had three big positives: it was written by one of the most accomplished reading science researchers, it used standardized assessments, and it had an above average sample size.

 

The study examined 149 students in grades 1-11. It looked at one year of core instruction in grades 1-3, as well as short bursts of intervention instruction for students in grades 2 and up.

As you can see here, the average impact of EBLI in this study was almost double Plonsky’s benchmark for strong evidence. Therefore, the evidence in this case study indicates a high level of progress for the EBLI students. 

 

Phono-Graphix is another popular speech to print program. In 2022, I did a systematic search for Phono-Graphix studies on Education Source and the Phono-Graphix website and located 9 case studies, 4 of which had enough detail to calculate an effect size. These 4 studies had small sample sizes, averaging 17.87 participants. However, they all used the same standardized assessment as the EBLI study (the Woodcock). Across these 4 studies, I found a mean effect size of 1.27, 95% CI = [.16, 2.37]. According to Plonsky’s guide, the Phono-Graphix studies also showed strong evidence of efficacy, albeit smaller than that of EBLI. To better compare the impact of speech to print programs in case studies, I conducted a meta-analysis of the 5 above referenced case studies.

According to these results, EBLI and Phono-Graphix showed strong evidence of efficacy, whereas Reading Simplified and SPELL-Links showed moderate evidence of efficacy. That said, the Reading Simplified and SPELL-Links studies were of much higher quality, and their results might therefore be deflated by comparison. I decided to conduct one final analysis, in which I compiled all of the experimental, quasi-experimental, and case studies together. However, I weighted the case studies by dividing their mean effect size by 1.55, as according to Plonsky’s review, case studies on average showed results that were 55% higher than those of experimental and quasi-experimental studies. These results are likely less accurate than looking at the two study designs separately; however, they allow us to get a more global picture of the impact of speech to print programs like EBLI.
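For transparency, here is a minimal sketch of that adjustment, using the pooled case-study figures quoted above. The actual analysis pooled individual studies, so treating each program's mean as a single entry is a simplification for illustration.

# Sketch of the adjustment described above: divide each case-study effect size by
# Plonsky's 1.55 inflation factor before pooling it with experimental results.
# Using each program's pooled mean as one entry is a simplification for illustration.
PLONSKY_INFLATION = 1.55
case_study_means = {"EBLI (Burns & Gallutia)": 1.63, "Phono-Graphix (4 studies)": 1.27}

for name, d in case_study_means.items():
    print(f"{name}: adjusted d = {d / PLONSKY_INFLATION:.2f}")   # ~1.05 and ~0.82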

As can be seen from the above results, the impact of speech to print is very promising. This brings me back to the original questions that inspired this article:

  1. What do I think of the EBLI reading program? Qualitatively, I like the EBLI program. It is very systematic and presents a phonics curriculum with a more linguistically accurate methodology. I especially like the EBLI method for older struggling readers. EBLI has one study that met my criterion for review, and it showed a very high impact on learning across all grades. While one study alone is never definitive, there have been many high quality studies on similar programs, and all of them have shown very positive results.

  2. What do I think of speech to print approaches to phonics? I think we have sufficient experimental evidence to say that a Speech to Print approach is highly effective at helping young students and older struggling readers learn how to read. 

  3. Do I think a speech to print approach is better than a traditional phonics approach? While there are many studies on speech to print approaches, to the best of my knowledge, none of them directly compared a speech to print approach with a traditional, print to speech, or systematic synthetic approach to phonics. It is therefore difficult to say that either approach is better, as this comparison has not been specifically studied. Overall, both types of approaches show strong evidence of efficacy when compared to business as usual control groups, and there are programs on both sides with strong evidence of efficacy. Specifically, Lexia and Reading Simplified show some of the highest evidence of efficacy, in my opinion, despite being very different approaches.

 

That said, while the mean effect size for structured literacy in my past research and the mean effect size for speech to print approaches are similar, I have noticed that the speech to print studies show more consistent results. With the print to speech studies, we seem to have a wider distribution of effect sizes, with some very high results, such as the studies on Empower, Lexia, and Corrective Reading, but also some very low results, such as the studies on Wonders and Open Court. Conversely, I have yet to find a single speech to print study with statistically insignificant results.

 

While I do not yet feel confident answering this question definitively, I remain cautiously optimistic about speech to print approaches and will likely continue to use such an approach in my own classrooms. I especially like the speech to print approach when working with older struggling readers; however, there is also strong evidence that it can work for younger students. Moreover, I hope that this will be an area of future study in reading science. Specifically, I would like to see more studies on EBLI and more studies comparing different types of phonics approaches.

 

EBLI Final Grade: B+. One case study showed a mean effect size of 1.63 on the Woodcock standardized assessment.

 

Qualitative Grade: 10/10. In my opinion, the EBLI program contains all essential types of literacy instruction.

 

Disclaimer: Please note that this review is not peer reviewed content. These reviews are independently conducted. Pedagogy Non Grata does not profit from any program review found on this website.

Written by Nathaniel Hansford: teacher and lead writer for Pedagogy Non Grata

Last Edited 2023-04-08

 

References:

 

-Aiken. (2020). Targeted Reading Intervention Teacher Certification: An Approach to Building and Sustaining Teacher Expertise in Rural Schools. Literacy Research and Instruction, 59(4), 346–369.

-Amendum, S. J., Bratsch, H. M., & Vernon, F. L. (2018). Investigating the Efficacy of a Web‐Based Early Reading and Professional Development Intervention for Young English Learners. Reading Research Quarterly, 53(2), 155–174. https://doi-org.ezproxy.lakeheadu.ca/10.1002/rrq.188

-Amendum, S. J., Vernon-Feagans, L., & Ginsberg, M. C. (2011). The Effectiveness of a Technologically Facilitated Classroom-Based Early Reading Intervention. Elementary School Journal, 112(1), 107–131.


-Bratsch-Hines, M., Vernon-Feagans, L., Pedonti, S., & Varghese, C. (2020). Differential Effects of the Targeted Reading Intervention for Students With Low Phonological Awareness and/or Vocabulary. Learning Disability Quarterly, 43(4), 214–226. https://doi-org.ezproxy.lakeheadu.ca/10.1177/0731948719858683

--"Burns, M & Gallutia, M. (2005). Evidence-Based Literacy Instruction (EBLI)

Background, features, and Results. https://eblireads.com/wp-content/uploads/2021/10/EBLI_results_Woodcock-Johnson-2.pdf?fbclid=IwAR1qCyUXjgwfQlHv2l5WAX-jiiFlpRmh6lxHNZv0njngbmELzAtcibjQ4cg "

-Carmen, et al. (1996). A New Method for Remediating Reading Difficulties. Annals of Dyslexia, 46. Retrieved from <https://www.phono-graphix.com/pdfs/research/OrtonAnnals.pdf>.

-Coalition for Evidence (2013). Demonstrating How Low-Cost Randomized Controlled Trials Can Drive Effective Social Spending: Project Overview and Request for Proposals. Washington, D.C. http://coalition4evidence.org/wp-content/uploads/2014/02/Low-cost-RCT-competition-December-2013.pdf

-D’Agostino, J. V., Lose, M. K., & Kelly, R. H. (2017). Examining the Sustained Effects of Reading Recovery. Journal of Education for Students Placed at Risk (JESPAR), 22(2), 116–127. https://doi.org/10.1080/10824669.2017.1286591

-Dias, K. & Juniper, L. (2002). Phono-Graphix - who needs additional literacy support? An outline of research in Bristol schools. Support for Learning 17, 1, 34-38

-Duncan, E. (1998) Brook Knoll School Pilot Study. Retrieved from <https://www.phono-graphix.com/pdfs/research/brookknollpilot.pdf>. 

-Duncan, E. (2002). Meta Summary of International Phono-Graphix Research. Paper presented to the HAAN Foundation.

-Endress, S. A. (2007). Examining the effects of Phono-Graphix on the remediation of reading skills of students with disabilities: a program evaluation. Education & Treatment of Children, 30, 2.

-Fitton, L., McIlraith, A. L., & Wood, C. L. (2018). Shared Book Reading Interventions With English Learners: A Meta-Analysis. Review of Educational Research, 88(5), 712–751. https://doi.org/10.3102/0034654318790909

-McLernon, H., Ferguson, J., & Gardner, J. (2005). Phono-Graphix: Rethinking the reading curriculum. In Learning to Read and Reading to Learn. E. Kennedy & T. M. Hickey (Eds.). Dublin, Ireland; Reading Association of Ireland.

-NRP. (2001). Teaching Children How To Read. Retrieved from <https://www.nichd.nih.gov/sites/default/files/publications/pubs/nrp/Documents/report.pdf>. 

-Palmer, S. (2000). Assessing the benefits of phonics intervention on hearing impaired children's word reading. Deafness & Education International, 2, 3, 165-178.

-Plonsky, Luke & Oswald, Frederick. (2014). How Big Is "Big"? Interpreting Effect Sizes in L2 Research. Language Learning. 64. 878-912. 10.1111/lang.12079

-Simos, P., et al (2007). Intensive instruction affects brain magnetic activity associated with oral word reading in children with persistent reading disabilities. Journal of Learning Disabilities, 40, 1, 37-48.

-Vernon-Feagans, L., Gallagher, K., Ginsberg, M. C., Amendum, S., Kainz, K., Rose, J., & Burchinal, M. (2010). A Diagnostic Teaching Intervention for Classroom Teachers: Helping Struggling Readers in Early Elementary School. Learning Disabilities Research & Practice (Wiley-Blackwell), 25(4), 183–193. https://doi-org.ezproxy.lakeheadu.ca/10.1111/j.1540-5826.2010.00316.x

-Vernon-Feagans, L., Kainz, K., Amendum, S., Ginsberg, M., Wood, T., & Bock, A. (2012). Targeted Reading Intervention: A Coaching Model to Help Classroom Teachers With Struggling Readers. Learning Disability Quarterly, 35(2), 102–114. https://doi.org/10.1177/0731948711434048

-Vernon-Feagans, L., Kainz, K., Ginsberg, M., Hedrick, A., & Amendum, S. (2013). Live Webcam Coaching to Help Early Elementary Classroom Teachers Provide Effective Literacy Instruction for Struggling Readers: The Targeted Reading Intervention. Journal of Educational Psychology, 105(4), 1175–1187. https://doi-org.ezproxy.lakeheadu.ca/10.1037/a0032143

-Walker, J. (2018). Cognitive load theory, element interactivity and phonics teaching. The Literacy Blog. https://theliteracyblog.com/2018/08/15/cognitive-load-theory-element-interactivity-and-phonics-teaching/?fbclid=IwAR1IOLx6irEAvZSXqejxwKAktHUyDlejz1Xjm94z_t7taQ8_6YrUolnEmKE

-Walker, J. (2019). The beautiful simplicity in McGuinness’s prototype. The Literacy Blog. https://theliteracyblog.com/2019/08/06/the-beautiful-simplicity-in-mcguinnesss-prototype/

-Wright, M. & Mullan, F. (2006). Dyslexia and the Phono-Graphix reading programme. Support for Learning, Volume 21, 77-84
