The following article summary is taken from Nicomsoft.com. Read it in its entirety here:
The process of converting an image to an editable document is divided into several steps. Each step is a set of related algorithms that performs one part of the OCR job. The general steps in the OCR process are as follows:
Loading an image as a bitmap from a given source.
Detecting the most important image features, such as resolution and inversion.
The image can be skewed or noisy, so de-skewing and de-noising algorithms are applied to improve image quality.
Many OCR algorithms handle only bi-tonal images, so color or grayscale images must be converted to bi-tonal (binarized).
Line detection and removal.
Page layout analysis (also called "zoning").
Detection of text lines and words.
Analysis of combined (touching) and broken characters.
Recognition of characters.
Saving results to the selected output format, for example, searchable PDF, DOC, RTF, or TXT.
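Two of the steps above, binarization and text-line detection, can be sketched in a few lines of pure Python. This is an illustrative sketch, not code from any particular OCR engine: the image is assumed to be a 2D list of grayscale values (0 = black, 255 = white), binarization uses a simple fixed global threshold (real systems typically use adaptive methods such as Otsu's), and text lines are found with a horizontal projection profile.

```python
def binarize(image, threshold=128):
    """Convert a grayscale image (2D list, 0=black..255=white) to a
    bi-tonal one (1 = ink, 0 = background) with a global threshold."""
    return [[1 if px < threshold else 0 for px in row] for row in image]

def detect_text_lines(bitonal):
    """Return (start_row, end_row) spans of rows containing ink,
    i.e. a horizontal projection profile split on empty rows."""
    profile = [sum(row) for row in bitonal]  # ink pixels per row
    lines, start = [], None
    for y, ink in enumerate(profile):
        if ink and start is None:
            start = y                        # a text line begins
        elif not ink and start is not None:
            lines.append((start, y - 1))     # a text line ends
            start = None
    if start is not None:                    # line runs to the page edge
        lines.append((start, len(profile) - 1))
    return lines

# A tiny 5x6 "page" with two text regions separated by a blank row:
page = [
    [255] * 6,
    [255, 0, 0, 255, 0, 255],
    [255, 0, 255, 0, 0, 255],
    [255] * 6,
    [255, 0, 0, 0, 255, 255],
]
print(detect_text_lines(binarize(page)))  # → [(1, 2), (4, 4)]
```

A production pipeline would apply the remaining steps (de-skewing, zoning, word segmentation, character recognition) around this same data flow, but the projection-profile idea is the classic starting point for line detection on clean, horizontally aligned text.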
This is not a complete list. Many other minor algorithms must also be implemented to achieve good recognition on various image types, but they are not essential in most cases and vary between OCR systems.
Every OCR step matters: the whole process will fail if any step cannot handle the given image correctly. Each algorithm must work correctly on the widest possible range of images, which is why there are only a few good universal OCR systems on the market. On the other hand, if some features of the input images are known in advance, the task becomes much easier: recognition quality improves when only one kind of image must be processed. To achieve the best results in that case, a good OCR system must allow the most important parameters of every algorithm to be adjusted; sometimes that is the only way to improve recognition quality. Unfortunately, even now no OCR system reads as well as a human, and it looks like such systems will not appear in the near future.
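The idea of exposing per-step parameters for a known image class might look like the following sketch. The settings object, field names, and the "fax profile" values are all hypothetical illustrations, not the API of any real OCR product:

```python
from dataclasses import dataclass

@dataclass
class OcrSettings:
    """Hypothetical per-step tuning knobs an OCR pipeline might expose."""
    binarize_threshold: int = 128   # global threshold for bi-tonal conversion
    deskew_enabled: bool = True     # can be skipped for born-digital images
    min_noise_area: int = 4         # speckles smaller than this are removed

# Illustrative profile for one known document class, e.g. high-contrast
# faxes, where a higher threshold and lighter de-noising may help:
fax_profile = OcrSettings(binarize_threshold=160,
                          deskew_enabled=True,
                          min_noise_area=2)
```

The point is architectural rather than the specific values: when every algorithm in the pipeline reads its parameters from such a profile, one codebase can be tuned per document class instead of being forced into a single universal compromise.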