Throughout the history of application development, programmers have repeatedly made the same mistakes, educators have continued to inadvertently miss important nuances, and the industry as a whole has allowed these errors to carry forward into the corporate world. The result is many developers who write vulnerable applications out of ignorance; in some cases, developers even believe they are securing an application when they are in fact creating the vulnerabilities themselves.
Only once developers are properly educated from the ground up does it become possible to foster the proper methodology and good practices for security-centric and quality-assurance driven development.
Many of the common programming pitfalls responsible for modern vulnerabilities are far from obvious and completely language-agnostic; sometimes they are small or seemingly insignificant mistakes that are easy to miss or gloss over. These errors stem from broad concepts rather than any language-specific detail, so proper and improper implementations exist in nearly every language. This article focuses on interpreted languages, but that does not mean these vulnerabilities and structural inadequacies exist only in interpreted languages.
The key to being a resourceful programmer is recognizing which things, generally speaking, may cause a bug. Bugs are what hamper stability, performance/efficiency, and security. Solving any one of these problems in its entirety will usually solve at least some of the others in the process (e.g. while tweaking for stability one may find a slight increase in performance).
Bugs include everything from the program improperly displaying funny characters on the screen to reproducible application crashes.
Typically, the greater a bug's impact on the user's experience (a complete crash versus a small hiccup), the greater the implications of its existence in other areas, such as security. One should always assume that a bug's effect in any one category is directly proportional (if not more so) to its effect in all relevant categories, no matter how insignificant the bug may seem - for example, a bug that marginally decreases a program's efficiency due to a nulled parameter can also be a gaping security hole.
Most bugs are the result of a program receiving unexpected data and using it as though it were expected. Because the ways an application traditionally receives input are finite, solutions to several classes of input problems can be anticipated. Data may contain illegal characters, exceed an allowed length or maximum value, require a type or sign conversion, etc. To compensate, the developer must make one of two choices: convert the data, in one way or another, into acceptable data, or reject the data entirely with an error message and demand re-submission of the input in proper form. Sometimes the input's "illegal characters" may be part of the application's desired output - making the challenge more difficult.
There are two primary techniques used for input sanitization in interpreted languages: whitelisting and modification. Modification converts the data a user submits into an acceptable form, whereas whitelisting simply checks whether the input matches particular criteria. With input modification, the functionality is usually executed regardless of the resulting data; with input whitelisting, the functionality is exposed only if the input meets the whitelist's criteria. If functionality must still be exposed when the whitelist check fails, the input is instead reset to a safe default value. Some more specific examples of each technique are below:
Whitelisting:
A. Cancel process if malformed - Only input matching a particular format will trigger the processing of the inputted data
B. Cancel process if incorrect type - Only input of a particular type (integer, string, negative number, etc.) will trigger the processing of the inputted data
C. Cancel process if out of bounds - Only input of a particular length, between two particular lengths, or between two particular values will trigger the processing of the inputted data
D. Cancel process if not whitelisted - If the inputted data is not in a list of allowed choices, it is not processed.
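The four whitelisting checks above can be sketched in Python. This is a minimal illustration, not a library API; the field names, pattern, bounds, and allowlist are all hypothetical:

```python
import re

# Hypothetical allowlist for technique D
ALLOWED_SORT_FIELDS = {"name", "date", "size"}

def validate_request(username: str, age: str, sort_field: str) -> dict:
    # A. Cancel if malformed: username must match a strict pattern
    if not re.fullmatch(r"[A-Za-z0-9_]{3,20}", username):
        raise ValueError("malformed username")
    # B. Cancel if incorrect type: age must parse as an integer
    try:
        age_value = int(age)
    except ValueError:
        raise ValueError("age must be an integer")
    # C. Cancel if out of bounds: age must fall within an accepted range
    if not 0 <= age_value <= 150:
        raise ValueError("age out of bounds")
    # D. Cancel if not whitelisted: sort field must be a known choice
    if sort_field not in ALLOWED_SORT_FIELDS:
        raise ValueError("unsupported sort field")
    return {"username": username, "age": age_value, "sort": sort_field}
```

Note that nothing here alters the data: input either passes every check unchanged or is rejected outright.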
Modification:
A. Sanitize-by-reformat - Characters or strings which may be malicious in user input are escaped or encoded to make them safe for evaluation
B. Sanitize-by-typecast - User input is forced into the correct data type (cast from string to integer, or vice versa)
C. Sanitize-by-boundary - When user input exceeds a particular size, the additional input is ignored. Depending on the data (e.g. decimals), this could mean string truncation, casting to a whole number, or rounding
D. Sanitize-by-delete - Characters or strings which may be malicious are removed from the user input entirely
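The four modification techniques above can likewise be sketched in Python. The length limit and fallback value are assumptions chosen for illustration:

```python
import html

MAX_COMMENT_LENGTH = 500  # assumed limit for this sketch

def sanitize_comment(raw: str) -> str:
    # D. Sanitize-by-delete: strip NUL bytes from the input entirely
    cleaned = raw.replace("\x00", "")
    # C. Sanitize-by-boundary: ignore anything past the maximum length
    cleaned = cleaned[:MAX_COMMENT_LENGTH]
    # A. Sanitize-by-reformat: HTML-encode special characters so they
    # are safe to display rather than interpret
    return html.escape(cleaned)

def sanitize_quantity(raw: str) -> int:
    # B. Sanitize-by-typecast: force the value into an integer,
    # falling back to a default when the cast fails
    try:
        return int(raw)
    except ValueError:
        return 0
```

Unlike whitelisting, these functions always produce a result: the data is transformed into an acceptable form rather than rejected.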
You may notice that whitelisting technique "A" corresponds to modification technique "A", "B" to "B", and so on. In many cases, a combination of modification and whitelisting is used for sanitizing. Performing these steps in the wrong order, or inadvertently casting a value to the wrong data type, may prove costly.
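A hypothetical example of how a poorly chosen sanitization step can prove costly: relying on sanitize-by-delete alone to remove a blacklisted token can be defeated by nesting the token inside itself, because the deletion reassembles a working copy from the surrounding characters. Sanitize-by-reformat has no such failure mode:

```python
import html

def delete_filter(comment: str) -> str:
    # Sanitize-by-delete: strip the dangerous token from the input
    return comment.replace("<script>", "")

def escape_filter(comment: str) -> str:
    # Sanitize-by-reformat: encode metacharacters instead of deleting
    return html.escape(comment)

# A nested payload crafted to survive deletion
payload = "<scr<script>ipt>alert(1)</script>"

# Removing the inner "<script>" splices the outer pieces together:
delete_filter(payload)  # → '<script>alert(1)</script>'

# Escaping leaves no executable markup behind:
escape_filter(payload)  # → '&lt;scr&lt;script&gt;ipt&gt;alert(1)&lt;/script&gt;'
```

The payload and filter here are deliberately simplistic; the point is that deletion-based sanitizers must be checked against reassembly, while encoding transforms every metacharacter regardless of position.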