Welcome to another article here on Vocal. I'm Jared Rimer. This is the second installment of a multipart series that delves into how blind and visually impaired people go about using the computer. In the first article, I talked about screen readers, mentioned some specific programs that came out around the time I started using a computer in the early 90s, and provided a basic understanding of how it all works. Part 2 will focus on using screen readers specifically on Windows.
My first encounter with this operating system was in the days of Windows 3.1 and a screen reader called Window-Eyes. Going from a command line in DOS to a graphical user interface like Windows was a brand new experience. The instructions were all on 4-track cassette, and it required many hours of learning.
No matter which screen reader you chose back then, it was difficult to learn. It was very interesting how you could press a key, like the alt key, and it told you there was a menu bar. Arrowing around the menu then gave you further options, and learning how the operating system worked this way was very cool.
The reason I liked Window-Eyes was that I am a partially blind user who can see the mouse in the right conditions, and Window-Eyes would read me what was under the cursor. JAWS did not have that capability until version 18, which was released in April 2017. I was happy to see this feature added to JAWS, since Window-Eyes development stopped in May 2017.
As Windows changed, the screen reader changed with it. On the current operating system, Window-Eyes and JAWS work the same way: they read from the video card and tell you what they see. They do this with drivers, and by hooking into programs (where practical) to get the information from the program or the operating system to you. In some programs, this model is called the DOM, or Document Object Model.
If you open a document using Notepad or another word processing program, the reader will even announce particular attributes, if it is commanded to do so in its settings. With Notepad, for example, you learn that you can only save in text format, and the reader will speak any errors the operating system may throw, even in a program that simple. Hearing what is underlined, bolded, or italicized, the different types of fonts, and other characteristics makes a real difference for someone who can't see, because it lets them format their work to the specifications of an employer or school.
Speaking of writing, both readers have options to read by line, paragraph, character, and word, or from the cursor to the end of the document. Reading can be stopped with a keystroke at any time. You can also control the mouse with the keyboard; those commands are called "mouse movement keys."
The mouse can be moved anywhere on screen with all of the major readers. Each screen reader has its own set of commands, and each has the chance to innovate and make using the operating system work best for you.
The other thing screen readers can do is tell you about specific elements you encounter, such as a combo box, check box, radio button, or buttons like OK and Cancel. After I learned this, I remember one time my father was telling me to click the arrow, and when he clicked it, I said: "That's a combo box." Sighted people know what they're looking for, but we get more information than just an arrow to click on. We're also told when a menu item opens more options. That's called a sub-menu, or in the case of a dialog, an ellipsis or some other cue lets us know that we're interacting with something that will open something else when we press Enter. Some menu options are announced as checked, and pressing Enter un-checks them. The reader tells the blind or disabled person all of this through speech, and now also in braille (through refreshable braille displays).
Some commands that may be of use
Let's talk about some of the commands you may end up using in your day-to-day operation of a screen reader.
- Read to end
This is the command I mentioned that reads from the cursor to the end of the document, known as the "read to end" command. With the former Window-Eyes, using the older Vocal-Eyes layout, it was a simple alt+r, with r standing for read. With the newer layout developed later on, the same command is done by pressing ctrl+shift+r, again with r for read. The command can be changed within the reader's options if you wish. With JAWS, it is ins+down arrow, with the down arrow standing for the next line. This command can be changed with the keyboard manager within JAWS; however, in my personal experience, changing commands like this one was easier with the hot key options found within Window-Eyes itself.
Working with two different readers, you've got to do your best to remember which command does what, depending on the reader, especially when I introduce a third reader to the mix in another article. For now, I can tell you that if you haven't used a reader for a while, you have to remember that it isn't this command to read, it's that command to read. You pick it up quickly. If you use three readers, which is sometimes the case, then you have multiple sets of commands to learn.
- Reading by line, sentence, and paragraph
While I don't personally use these keys, there are commands to read by line, paragraph, and sentence. With JAWS, it's insert plus the numbers on the numeric keypad. Some of these may even be undefined, if the screen reader manufacturer decided to leave them that way. I couldn't imagine setting hot keys for all of the commands that could be supported by the Window-Eyes hot key manager; I forget the exact number, but between the keyboard and mouse, there is a lot of customization. I also forget at the moment what the Window-Eyes equivalents for reading by these units are, but the point is that the commands can differ, and you have to remember each one.