More than half a century ago, a new type of technology, phototype, was heralded for its “ability to be stored in one shallow box, or slipped into one airmail packet, replacing the equivalent of tons of standing metal type.”¹ Today, it is customary to see a pocket bulging with a mobile phone—a different shallow box—that may hold more typefaces than generations of typefounding in previous technologies might have created.
NEW TECHNOLOGY has benefited the typographic representation of writing systems that do not make use of the Latin script: the type used for the languages spoken by most of the world’s population.
Books set in the Devanagari script are one example of improved accessibility and ease of composition. In 1955, bookwork in an Indian script required up to seven cases of metal type for a single size, and “the cost of maintaining a composing room for bookwork can be immense.”² Now, the OpenType Devanagari fonts commissioned and used by Harvard University Press to set volumes of classical texts with accompanying English translations can be installed in a few minutes. In 2019, the Murty Sanskrit font contained 1,025 glyphs plus Latin, totaling 1,492 glyphs. Sanskrit is perhaps one of the most challenging Indian languages to represent typographically, owing to the range and complexity of its consonantal combinations, some of which extend to considerable vertical depth.
In terms of design, digital technology can reproduce letterforms that were previously difficult to replicate cleanly in metal type, providing closer typographic abstractions of manuscript forms. Current technology also caters to neologisms and loanwords relating to contemporary needs, with the ability to supply alternative preferred forms. For instance, Devanagari fonts can distinguish between the Hindi, Sanskrit, and Marathi languages for textual composition.
Contextual alternates—essential to some writing systems while only optional to others—were first programmed for Arabic-script typesetting during the short-lived era of phototype composition. A provision for these is now a standard feature in digital font development. Another improvement is the accurate placement of vowel signs and other marks using OpenType features.
This relies on the meticulousness of the font’s creator, because the positioning is now embedded in the font itself, rather than in the proprietary software Linotype used for the first Indian digital fonts, designed for newspaper composition in the early 1980s. The previously cumbersome task of combining scripts in multiscript documents—even scripts of different writing directions within the same line—has been resolved. This is vital in Asia, where three or more scripts may need to appear simultaneously in a single document or on a road sign.
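The point about writing directions can be made concrete at the character level. The sketch below, using only Python’s standard library, shows that each character carries a Unicode bidirectional class; a layout engine applies the Unicode bidirectional algorithm to these classes to reorder a logically stored string for display, which is how left-to-right Devanagari and right-to-left Arabic can share a single line. (The particular characters are illustrative choices, not drawn from the text above.)

```python
import unicodedata

# Each Unicode character declares a bidirectional class. A layout engine
# applies the Unicode bidi algorithm to these classes to decide display
# order, so mixed-direction text can be stored in one logical sequence.
for ch in ["\u0915",   # DEVANAGARI LETTER KA  -> class "L"  (left-to-right)
           "A",        # LATIN CAPITAL A       -> class "L"  (left-to-right)
           "\u0627"]:  # ARABIC LETTER ALEF    -> class "AL" (right-to-left)
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch):<22}  {unicodedata.bidirectional(ch)}")
```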
Indeed, an Indian rupee banknote bears 17 languages in 11 scripts. Designers and digital type foundries have taken pains to harmonize scripts that need to work together—although in wayfinding it is, of course, useful, if not necessary, for signs to have differentiation for ease of navigation. Multiscript typesetting may require compromises in the layout, for example, increased leading, but no longer does the representation of a script have to suffer.
With such improvements and commitments by invested software companies to facilitate the accurate rendering of diverse writing systems, typographic excellence in scripts beyond Latin is achievable. It is therefore perhaps only natural to assume that this would be a golden era for textual communication in diverse scripts for global linguistic communities.
Regrettably, the promise of delivering high readability, and therefore effective vernacular textual communication worldwide, by means of digital type technologies has yet to be realized. This is not merely due to a paucity of high-quality fonts, some of which continue to bear a legacy of limitations imposed by previous technologies, when type was still a three-dimensional object; it is also because layout software and input methods remain problematic, particularly in non-print environments: on computer screens.
Unfortunately, there is no guarantee that in transmitting a text in, say, the Meetei Mayek script, the recipient will see the same result as the sender. The textual output will depend on the font, which needs to be Unicode-compliant (or at least identical to the sender’s character map), and may be affected by the operating platform as well as the composing software.
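The underlying reason is that what travels between sender and recipient is a sequence of Unicode code points, not glyphs; everything visible is reconstructed at the receiving end. A minimal illustration in Python, using an arbitrary Devanagari conjunct as the example:

```python
import unicodedata

# The Devanagari conjunct "ksha" travels as three code points; whether
# the recipient sees the correctly fused conjunct depends entirely on
# the font and shaping engine at their end -- the text carries no glyphs.
ksha = "\u0915\u094D\u0937"  # KA + VIRAMA + SSA

for ch in ksha:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
```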
This is not limited to scripts that are perceived as minority scripts, such as those used in Myanmar. There are also problems with scripts that have over 300 million users. This makes for a continued reliance on sending images of text in the form of screenshots or PDFs. This issue, to some extent, also accounts for the relatively late development of South Asian interactive e-newspapers, which have yet to avail themselves of the fonts specifically designed for online reading. Most either use the same fonts as the print versions (which are flourishing in South Asia) or default fonts provided by the operating system. Indeed, newspapers continue to simply post images of their print versions due to the lack of available resources to set up and maintain a separate electronic edition. Similarly, few e-books in Indian languages are interactive, but rather are generated from image-based PDFs. OCR, the automatic conversion of an image to text, remains an unresolved issue.
Despite the disconcerting difficulties in computer-aided communication for many scripts, traffic on social media in a multitude of languages grows unabated, even when such interaction is profoundly affected by unresolved typographic issues. Perhaps the most surprising situation in 21st-century textual communication is the heavy use of transcription. Unlike those engaged in typography, who might use a physical or virtual keyboard specific to their script, the majority of users in India, particularly those of the younger generation, are known to find localised input methods cumbersome, especially on smaller devices such as smartphones or tablets, and prefer to enter text in the Latin script using the “qwerty” keyboard layout. They might then use software such as Google Translate to transcribe it into Devanagari, Bengali, Gujarati, and other scripts. However, there are issues of accuracy. It is difficult, for instance, to distinguish between the three kinds of “sa” or the four kinds of “da” that are present in the Indian phonological system.
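The ambiguity is easy to demonstrate: Devanagari distinguishes three sibilants that a Latin-script keyboard tends to collapse into a single “s,” so any transcription system must guess which one was intended. A small sketch using Python’s standard library (the sibilants are one example; the consonants behind the four kinds of “da” pose the same problem):

```python
import unicodedata

# Three distinct Devanagari sibilants, all commonly typed as "s" or "sa"
# on a Latin keyboard; a round trip through Latin transcription cannot
# recover which of the three was originally meant.
for ch in ["\u0936", "\u0937", "\u0938"]:  # SHA, SSA, SA
    print(f"{ch}  U+{ord(ch):04X}  {unicodedata.name(ch)}")
```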
In some cases, for instance on laptops, virtual keyboards allow keying in the Latin script and provide a drop-down menu in the desired script from which to select the appropriate word (whose correct form is not always evident to a user more accustomed to reading the dominant Latin script). Some apps, such as Snapchat, have a built-in transcribing system, but again this can result in orthographic inaccuracies. Others, like WhatsApp, have difficulty rendering certain sequences with vowels. Many users prefer to leave the text in Latin, for want of a font in their own script or of a clearly readable typeface, because speed of texting and instant readability are important aspects of text messaging by phone.
Unfortunately, the default typefaces used in operating systems, apps, and phones are not necessarily the most appropriate for sustained reading in a given language, having perhaps been chosen by engineers and programmers—who are energetically committed to resolving input methods—rather than by typographers or experienced type designers. Occasionally the user has the option to change the default font, but may be unaware of the possibility or of how to do it. The long-term consequences of localized system fonts, which may possess odd quirks (at times to counter anticipated technical issues), should not be underestimated: in many parts of the world these become the common reading experience for a generation, conditioning its expectations of the textual rendition of its language. Given the problems in South Asian scripts, it might be assumed that Japanese textual communication would also suffer, since it makes use of a combined writing system of syllabaries and logographic kanji.
Yet the Japanese have resolved such difficulties in various efficient ways, enabling users to enter text swiftly. The standard keyboard carries both Latin and Japanese layouts. The Japanese keyboard uses kana and is arranged according to a traditional Japanese layout, preferred by dedicated Japanese writers and by those less familiar with the Latin script, such as children under ten and the older generation. For others, the Latin script tends to serve as the input method, by means of either a physical or a virtual keyboard: syllables typed in Latin are automatically converted to hiragana, and the user calls up a drop-down menu to convert them to katakana or kanji where appropriate.
Recent versions of macOS perform context-sensitive automatic conversion (like predictive text in English) into hiragana, katakana, or kanji, and the user can select alternative forms from a drop-down menu via the space or tab key. A similar method is available in iOS for smaller devices such as phones and tablets. For social media, however, smartphones provide another efficient system using the Japanese syllabary, which can be either swiped or tapped, known as the “flick input” method.
The development of effective text input methods for Japanese, accompanied by an array of high-quality typefaces, is the result of investments unmatched in other parts of Asia. Yet the experience of Persian textual communication is heartening. Persian often suffers from a disparity in the attention given to its typographic rendering in comparison to Arabic. However, the availability of the Persian keyboard on Android phones and, since 2017, on iOS devices has transformed input entry for Persian texts for many users. As a result, Latin transcription of the Persian language on social media has almost disappeared. Of course, different applications handle the input of Persian numerals differently; in some apps these are inappropriately shifted to the left after keying.
The recently introduced Arabic default font for iOS devices (and therefore for Twitter and Facebook) is notable for its cramped descenders, and its system of mark positioning is problematic for reading Persian. The Android version is superior in this regard, as is the slightly more conventional font used on Instagram. However, when compared with the highly regarded typefaces used for Persian text in print media and on sites such as BBC Persia, further progress is desired by social media users.
Clearly, experiences differ according to language, script, and location, and, inevitably, according to the resources available and the priorities given to the different languages and diverse scripts that populate Asia. It is undeniably frustrating for many millions of users to continue to encounter issues with textual communication, particularly when these could be resolved with today’s technologies and expertise. The potential for improvements in the rendering of vernacular text can be realized through a two-pronged approach that addresses both input and layout issues and the quality of type design. It is no overstatement to suggest that such improvements are vital to fostering and sustaining high levels of literacy in many diverse scripts, particularly those of South Asia.
Technologies of the 21st century have encapsulated unprecedented tons of metal type in a device small enough to fit in a pocket, but smart devices are effective means of communication only when there is connectivity. And connectivity can be as elusive in rural areas of England as in rural areas of India.

With many thanks to: Suman Bhandary, Borna Izadpanah, Charles Hollom, Neelakash Kshetriymayum, Toshi Omagari, Aadarsh Rajan, Vaibhav Singh, Hazuki Yonema.