Tuesday, November 27, 2007

Using software in foreign vs. second language environments

Tutorial software used in a foreign language environment can benefit from the value added by a language teacher’s L1-specific instruction, while bilingual resources in software used in a second language environment may pose problems.

In a FL learning environment (learning Chinese in Boston), where your students all speak the same L1 (as NS or NNS), you can address issues in learning the target language that are specific to speakers of that L1. In a SL learning environment (English in Boston), your students might come from many different first language backgrounds, making bilingual instruction or support unavailable or at least impractical (in addition to being undesirable in a direct-approach-inspired intensive immersion environment).

Consider a pronunciation program, such as Pronunciation Power, which presents activities for discriminating and producing 52 English phonemes. In a FL environment, students can be knowledgeably directed to focus on the sounds that do not exist in their L1 and to recognize those they share with it. That kind of targeted guidance is highly unlikely to happen, to much effect, in a heterogeneous SL environment.

American school children, compared to, say, the French, do not, by and large, get much overt instruction in grammar and syntax. They might be introduced to some vocabulary-expansion techniques via morphology, but otherwise they receive little metalinguistic instruction.

Which might explain why I didn't know much about English grammar until I started studying French. I found a slim volume titled English Grammar for Students of French, part of a series that included other undergraduate staples, like Spanish and Italian. Structural approaches to FL instruction, as French was traditionally taught (back in the day), used grammar terminology as the common language, when, as I suggest, American students don't know their present progressive from their indirect object. In a way, then, this little primer proffered information to me when it was relevant and immediately applicable (digesting French grammar instruction) rather than abstract (accompanying native language instruction). Similarly, a FL teacher can add value to pedantic material, such as tutorial CALL software, by making the same kind of contrasts and comparisons with the students’ first language and perhaps taking a focused and selective approach to using the software, as in the case with studying pronunciation.

So the same program may be more appropriate for, or implemented very differently in, a FL environment than a SL one. Moreover, multilingual resources in software may be problematic in a SL environment, since not all users will benefit from them, depending on their first language and whether it's represented in the software. In addition to leaving some students feeling left out, bilingual features also present less of a linguistic challenge to the students whose L1 is represented, because they can lean on their L1 for instruction.


Multilingual help from EuroTalk's Talk More American English.

This all gets back to a previous discussion of evaluating software, where we saw that one must be familiar with the local environment—where the software will be used, how it will be implemented, and who will use it—in order to make the most relevant determination of its utility.

Wednesday, November 21, 2007

Why we use computers in education

We can take a thoughtful, knowledgeable approach to evaluating educational software, diligently following our checklist, guide, or principled framework of choice (see previous post) and still overlook some fundamental issues in computer-aided instruction, issues that strike at the core of why we use computers in education in the first place.

Some titles, let's say a grammar or writing reference, might strike some educators as not much different from a textbook. They might even offer the same exercises from the structuralist's or behaviorist's playbook. Even so, the computer provides the better medium:

Stimulation. No text can match the graphical potential of the computer display. Most texts are printed in black only, or perhaps with one ("spot") color in a range of hues. Printers charge per color: the fewer, the cheaper the job. Not so the computer monitor, which deals in millions of colors (16.7 million on a 24-bit display, since 2^24 = 16,777,216). And while most programs use "interactive" as a marketing buzzword more than a design criterion, they do stimulate and engage students in ways that textbooks can't.

Multimedia. Software can incorporate audio, video, and animated graphics to supplement textual information where the textbook must make text and graphics alone suffice.

Internet extension. Software can embed links to external resources, such as publishers' companion sites, which offer updated and expanded materials as well as learner communities.

Feedback. Textbooks offer feedback in the form of an answer key at the back of the book, which usually doesn't go beyond confirming whether an answer is correct. Software can offer feedback immediately, embedded in the response to each answer, and can include links to internal or external reference material. Moreover, that feedback can be explanatory, diagnostic, or elaborative, giving the learner far more than a simple confirmation of right or wrong.
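
To make the contrast concrete, here is a minimal sketch in Python of the kind of answer-specific, explanatory feedback a tutorial program can attach to each choice. The item, its messages, and the reference label are invented for illustration, not taken from any actual program.

    # A hypothetical multiple-choice item with feedback tied to each answer,
    # plus a pointer to reference material -- something a printed answer key
    # at the back of a book cannot provide.
    item = {
        "prompt": "She ___ to the gym every morning.",
        "choices": {
            "go": "Incorrect: with 'she', the verb needs the third-person -s.",
            "goes": "Correct: habitual actions take the simple present.",
            "is going": "Not here: 'every morning' signals a habit, not an action in progress.",
        },
        "answer": "goes",
        "reference": "Unit 3: Simple present vs. present progressive",
    }

    def respond(choice: str) -> str:
        """Return immediate, explanatory feedback for a learner's choice."""
        feedback = item["choices"].get(choice, "That is not one of the choices.")
        if choice != item["answer"]:
            feedback += " See " + item["reference"] + "."
        return feedback

    print(respond("is going"))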

Yet some critical positions on educational software seem to compare what a particular title offers not to a comparable book but to what a teacher could offer a student. Tutorial CALL software occupies a place somewhere between human instruction and a textbook; it needs to be assessed at its place on this continuum, not against the poles.



Guidance.
A learner studying on his own with a tutorial CALL program, outside of an educational setting, lacks some critical support that a teacher with CALL experience could provide:

Software selection.
An experienced CALL teacher is in a much better position to assess the pedagogical value of a piece of software than an independent learner, who may have little to go on beyond wildly unrealistic marketing hype. This is especially treacherous when navigating the waters of retail programs marketed to individuals, as opposed to more elaborate systems marketed to institutions. With the former, the company need only concern itself with making each individual sale, not with satisfying the more rigorous quality criteria of an informed institutional client with wide purchase decision-making responsibility.

Orientation.
A teacher can introduce a student to the functionality and resources of a program, explain its goals, demonstrate its procedures, and perhaps warn against some pedagogically unsound or confusing parts, all with the objective of maximizing the program's language teaching potential.

Direction.
An experienced language teacher can assess each student's specific needs and recommend a custom study plan for using the software, rather than leaving a learner to stumble through the program haphazardly, or to work through it linearly, which may not be an effective use of his time.

Thursday, November 8, 2007

Context in Language Software Evaluation

Language teachers are frequently called on by their school or department to evaluate tutorial CALL software for students to use in class or on their own in a self-access lab. Such evaluations are seldom carried out by someone with all the relevant skills and experience to conduct a pedagogically sound review, and even then the evaluator may not apply a principled approach. Typically, an evaluation takes the form of a superficial sight-seeing trip through randomly selected parts of a program until one tires of the experience and a feeling from the gut urges "use it" or "forget about it."

What is tutorial CALL?
We're talking here about pedantic, tutorial software that offers explicit language instruction, not generic computing or CMC tools that might be used in a language learning environment. Evaluations may be for
  • software
  • websites
  • courseware
Some common criticisms of tutorial CALL

"Students don't learn a language with a computer program."

True, but only the likes of Rosetta Stone traffic in the absurd marketing claim that software alone can teach a language. Moreover, students won't learn to communicate in a language with only a textbook either, but we still use one to provide structure.
Teachers experienced in using technology in the classroom know that tutorial CALL programs supplement classroom activities; they don't replace them. And they only accomplish that if well chosen.

"The problem with the software is that you don't know if students are using it."

Again, this is only true if we chose the wrong software. If we looked broadly at "educational software," we would see that the vast majority of titles are designed for the retail market (individual, not institutional, use). The design criteria there seem limited in scope to flashy graphic features that reproduce well in printed ads, rather than the more bothersome design assumptions involving language teaching methodology: what Hubbard calls "teacher fit" (approach), or the theory that describes ideal conditions for instructed SLA and the construction of tasks that provide those conditions (Chapelle).

Retail programs usually do not concern themselves with providing mechanisms for accountability. Institutional settings may require the software to report student time on task as well as scores, whether through e-mail notification, logs accessible to the teacher, or some kind of built-in drop box. More elaborate programs used in schools integrate these kinds of LMS features.
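
As a rough sketch of what that accountability plumbing amounts to (the record format and file name here are invented, not any particular product's), the program only needs to log who worked on what, for how long, and with what score, and make that log available to the teacher:

    import csv
    import time

    LOG_FILE = "lab_sessions.csv"  # a teacher-accessible log; it could as easily be e-mailed

    def log_session(student: str, unit: str, score: float, started: float) -> None:
        """Append one record of time on task and score for the teacher to review."""
        minutes = round((time.time() - started) / 60, 1)
        with open(LOG_FILE, "a", newline="") as f:
            csv.writer(f).writerow([student, unit, minutes, score])

    # Usage: note the start of a session, then log it when the unit is finished.
    start = time.time()
    # ... learner works through the unit ...
    log_session("student01", "Pronunciation: /i/ vs. /I/", score=0.85, started=start)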

Purposes of evaluations
  • to make a purchase or implementation decision on software, a process that effectively ends when this practical outcome is reached (a decision-driven evaluation);
  • to give design feedback in the developmental stages of software (a formative evaluation);
  • for research motivated by a hypothesis or open-ended question;
  • for a published software review (a summative evaluation).

General problems with evaluations
  • Evaluators use different criteria.
  • Evaluators are informed by different interests, knowledge, and experience.
  • Evaluations lack consistency across reviews.
  • Evaluations lack inter-rater reliability.
The dilemma in most evaluation situations is that while only a local decision can take the specific learning environment and population into account, only an evaluation grounded in a principled approach (see below), carried out by someone with a background in language teaching methodology, instructional design, and of course the content, will be valid and consistent. Teachers selecting their own software tend to evaluate subjectively, based on their own teaching and learning experience, computer literacy, and personal preferences.

Types of evaluation
If we look specifically at summative evaluations of language learning software, we find the following common approaches:
  • Checklists
  • Guides
  • Surveys
  • Principled frameworks
Checklists, the most common approach, offer a set of questions, usually with binary options or fill-in responses. They are simple to follow and may raise awareness among teachers inexperienced in CALL of the wide range of factors to be considered. They are more meaningful if the questions elicit commentary.
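
In structural terms, a checklist item is not much more than the following sketch suggests (the fields are my own, not drawn from any published instrument): a question, a binary or fill-in response, and, ideally, room for the commentary that makes the answer meaningful.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ChecklistItem:
        question: str
        answer: Optional[bool] = None  # the binary option
        comment: str = ""              # the commentary that makes the answer meaningful

    checklist = [
        ChecklistItem("Does the program give explanatory feedback on errors?"),
        ChecklistItem("Can a lesson be completed within one lab session?"),
    ]

    checklist[0].answer = True
    checklist[0].comment = "Yes, but only for multiple-choice items, not free input."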

Criticisms of checklists abound:
  • Terms used are not defined, are used inconsistently, or are open to varied interpretation.
  • Elements are not weighted, so some may exert disproportionate influence on the overall judgment.
  • Their simplicity belies the background knowledge and experience needed to respond accurately and appropriately.
  • Questions are little more than lists of features to look for.
  • They focus on technology more than on teaching and learning (language learning potential).
  • They lack reliability and validity.
For a checklist example, see the Software Evaluation Guidelines by the National Center for ESL Literacy Education (2003). This checklist addresses technical and pedagogical issues but not methodology specifically, which is a common omission. Among the questions:

"Do the individual program lessons fit within the time constraints of class or lab sessions so that a learner can finish a lesson in one sitting?"

This question seems to assume some validity in the use of tutor-type software in class, at the expense of human instruction and of more meaningful and authentic student-student or student-teacher interaction. For the most part, aside from introducing students to the functionality of tutorial CALL software, class is not the place to work on these programs. The tutor mode of computer use implies the absence of a teacher and the presence of a virtual one, if you will.

Guides are what I would describe as a hybrid between a checklist and more discursive prompts for thinking through, if not formally evaluating, the pedagogical value and instructional design efficiency of software. I created such a guide ten years ago, A Guide for Evaluating Language Learning Software.

Surveys assess student or teacher response to software or courseware after a considerable period of use, such as a semester.

Principled frameworks* represent an organizing scheme for characterizing the relationships between elements of language teaching and learning and computer use. Among the best known and most often referred to in the field are

  • Philip Hubbard's framework:
    • methodology driven
    • emphasizes need for evaluator to understand LL approach taken in design and fit to instructional approach
    • Non-hierarchical model**
      • Teacher fit (approach): assumptions about the nature of LL in light of what’s possible with computer-aided instruction.
      • Learner fit (design): realization of the approach: syllabus, tasks, activities, language difficulty, skill focus, roles of the teacher, the learner, and the materials.
      • Operational description (procedure): the form the approach and design take in the program: layout, activity type, feedback.

  • Carol Chapelle's framework
    • theory-based, task-oriented
    • driven by interactionist position***
    • focused on design and structure of LL task
    • evaluation can be judgmental at initial selection, based on how well suited the software appears to be, and empirical later, based on data from actual student use


  • CALICO Journal Software Review (Jack Burston)
    • descriptive not prescriptive
    • discursive not intuitive
    • software requirements (must meet first two and some combination of last three)
      1. Pedagogical validity
      2. Curriculum adaptability
      3. Efficiency
      4. Effectiveness
      5. Pedagogical innovation

      Four categories, based on Hubbard’s framework, form the template for evaluation for the journal:

      1. Technical features
      2. Activities (procedure)
      3. Teacher fit (approach)
      4. Learner fit (design)


      The CALICO software evaluation template thus presents a consistent qualitative measuring device.


    • *As described by Levy and Stockwell in CALL Dimensions: Options and issues in computer-assisted language learning, pp. 59–64.
      **Based on Richards and Rodgers' model (1982, 1986) of Approach, Design, and Procedure.
      ***Language is a rule-governed cultural activity learned in interaction with others; environmental factors are more dominant in language acquisition, as opposed to the innate abilities emphasized by the nativist position.



Saturday, November 3, 2007

The Thinking Person's Wiki

Wikis are all the rage in education. And why not? They represent a phenomenal improvement in ease of use and in the separation of content from form. In Web 1.0 days, putting content on a Web page was a feat that required considerable technical skill and time, much of which was spent on form--that is, formatting text, manipulating placement, coding relative and absolute links, etc.--much of it to force a Web page layout to simulate the printed page.

With wikis, students can create pages, add links, gadgets, RSS feeds, embedded media--images, audio, and video--with hardly a thought about how this happens. It is genuinely point-and-click, and students can finally concentrate on content. Formatting of content in a wiki doesn't move beyond the complexity of RTF or what you might do in a word processor.

What else? They offer a virtual community for students to work collaboratively, encourage peer review, demonstrate the importance of responsible network citizenship, and even make for an alternative tool for group presentations. Mistaken deletions or edits can be rolled back to a previous working version (a fix for sabotage or inadvertent changes), and teachers can monitor student participation. Collaboration can serve as "crowd sourcing," where a teacher charges students with creating some specific content (e.g., guidelines, evaluation criteria, tests) instead of simply providing it himself, thereby harnessing the collective intelligence of many. (Watch this Campus Technology webcast about wiki use in business schools.) This doesn't work for all crowds. That prospect must be evaluated beforehand.
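
The rollback feature deserves a pause, because it is what makes handing a shared page to a class a low-risk proposition. Conceptually it amounts to nothing more than keeping every revision and being able to restore an earlier one; here is a toy sketch of the idea (my own model, not any actual wiki engine's):

    class WikiPage:
        """Toy model of a wiki page that keeps its full revision history."""

        def __init__(self, text: str = ""):
            self.history = [text]  # every saved revision, oldest first

        def edit(self, new_text: str) -> None:
            self.history.append(new_text)

        @property
        def current(self) -> str:
            return self.history[-1]

        def rollback(self, revision: int = -2) -> None:
            """Restore an earlier revision (by default, the one before the latest)."""
            self.history.append(self.history[revision])

    page = WikiPage("Class guidelines: be constructive, cite your sources.")
    page.edit("asdf jkl;")  # sabotage or an inadvertent overwrite
    page.rollback()         # the earlier text is current again
    print(page.current)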

There are free versions for education (see pbwiki), and there's no software to maintain locally for all this functionality beyond a free Web browser and an uninterrupted broadband Internet connection (which might not be free).

So wikis are wonderful, but there's a catch: they require thoughtful, responsible users who understand the public nature of the space. What are some of the issues that these enlightened users must grapple with?

Pedagogical value
While sound pedagogy should always drive the curriculum, sometimes technology pulls it. Teachers and students may be awed by technology and lured into using it before they have a compelling use for it. Your students can learn to embed a YouTube video on a wiki page in minutes, but what's the pedagogical value of doing so? In their zeal to either keep up with their students or push the envelope and impress them with their command of Web 2.0 mash-ups, some teachers evidently believe that any task pursued in a constructivist vein that maintains students' interest is justified. It might score points on teacher evaluations, but what is the effect on language learning?

The trick is to think it through. Don't merely have students add media to a page; do so with a purpose. For example, find a video that models grammar, vocabulary, or usage of particular interest, and follow up with an activity that elicits specific language, such as a description.

Structure
Wikis can seem somewhat amorphous to new users. With no apparent structure, no linear site map of old, users must be shown how a wiki grows organically, with new pages added, sidebar links fleshed out, and meta pages created to organize distinct groups of content. But unlike static HTML pages--Web 1.0 authoring--this form takes shape during content creation, not in anticipation of it.

Authority
Who controls a wiki? Everyone and no one. Certainly the person who creates it also administers it, but for the most part that role is limited to deciding how long it lives and how users access it for editing. Otherwise, authority must be considered diffuse, shared equally among users, and self-policing in order for this communal space to work effectively as a whole.

Controversial topics
The Wikipedia entry for "Armenian Genocide," one of considerable depth, engenders far more words in the accompanying comment and discussion pages than in the entry itself. That a wiki is a collaborative environment suggests that some consensus can and should emerge, not a reasonable prospect for some topics--abortion, gay marriage, war, George W. Bush come to mind. Such topics would more fruitfully be discussed in an environment where individuals don't feel momentarily empowered to speak for the group or proffer a final statement.

Comments
Should a language learner's writing be subject to comment by strangers? Would the ego-destructive effects of negative criticism of content or form outweigh the possible benefits of having such an outlet for expression in the first place? It seems like a reasonable concern, but consider that language learners, in a second language environment anyway, are already subject to negative feedback on their production (misunderstanding, ridicule) as they engage native speakers in authentic contexts outside the classroom. They are not protected from such feedback there, so they must instead be empowered with the wherewithal to take it in stride.

Consider also that most students' wiki or blog entries are not likely to attract outside attention in the first place, so the danger is statistically insignificant. As a safeguard, wiki and blog comments can be moderated by authors (cherry-picked, in other words), so undesirable comments will not remain. Guest access or comments can also be disabled altogether.

Participation
Because they represent a team effort built from individual contributions, wikis enable freeloaders: users who do not contribute as much as others but reap the benefits of the collaborative product. Some of this can be tracked through access logs, though not with much practical effectiveness. In this regard wikis suffer just as public radio does: a listener can benefit from the commercial-free programming whether or not he contributes.
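
Tracking participation is simple enough in principle, just a tally over the wiki's edit history (the sketch below assumes a plain list of edit records, not any particular wiki's log format); the hard part is deciding what the numbers mean, since ten trivial edits may not be worth one substantive contribution.

    from collections import Counter

    # Hypothetical edit records: (user, characters added) pairs.
    edits = [("ana", 420), ("ana", 35), ("luis", 5), ("mei", 910), ("luis", 2)]

    edit_counts = Counter(user for user, _ in edits)
    chars_added = Counter()
    for user, added in edits:
        chars_added[user] += added

    print(edit_counts)  # how often each student edited
    print(chars_added)  # how much each student actually added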

Public nature of content
If you're concerned about your wikis, blogs, Google Docs, comments, or Picasa or Flickr photos being subpoenaed, you should probably rethink what you're doing in those spaces. It's hard to imagine content created and shared by reasonably thoughtful teachers or students for language learning purposes as posing a threat of criminal prosecution or civil litigation. But if that does concern you, keep in mind that your e-mail and hard drive can be subpoenaed as well.

Permanence
Your class might spend an entire semester contributing to and refining content on a class wiki. Will all that work vanish at the end of the semester? It might. Or it could easily be archived by creating a meta page with links to the pages created by a particular class. Unlike Wikipedia, student work on a language class wiki is more about process than product; the value derives from the experience of contributing. If the product matters too, archive the work or have students move what they want to keep to their own personal wiki, blog, or social networking page.

Conclusion
We must conclude that Web 2.0 applications, or hosted services, such as wikis entail greater responsibility from their users than a Web 1.0 site ever asked of its visitors. We are no longer merely information consumers in a one-way lecture with simple assumptions about authority and credibility, but participants in a much more complex web of ambiguous identities and an organic dialectic. With this democratization of control comes a distributed burden on contributors to be enlightened and responsible. It's not supposed to be easy, but it may just lead to a more intelligent information and communication system.