Whereas "computer error" implies that the computer has made a mistake, that is hardly ever true. In reality, almost always it is us - the humans - who are mistaken:
- Flaws are fundamental mistakes in the specification and design of systems, such as 'the Internet' (a massive, distributed information system with seemingly no end of security and other flaws!). The specifiers and architects are in the frame, along with the people who hired them, directed them and accepted their work. Systems that are insufficiently resilient for their intended purposes are an example: the issue is not that the computers fail to perform, but that mistakes in the requirements specification meant they were designed to fail;
- Bugs are coding mistakes e.g. the Pentium FDIV bug, caused by missing entries in a division lookup table deep within the chip. Fingers point towards the developers but again various others are implicated;
- Config and management errors are mistakes in the configuration and management of a system e.g. disabling controls such as antivirus, backups and firewalls, or neglecting to patch systems to fix known issues;
- Typos are mistakes in the data entered by users including those who program and administer the systems;
- Further errors are associated with the use of computers, computer data and outputs e.g. misinterpreting reports, inappropriately disclosing, releasing or allowing access to sensitive data, misusing computers that are unsuited for the particular purposes, and failing to control IT changes;
- 'Deliberate errors' include fraud, e.g. submitting duplicate or false invoices, expense claims, timesheets and so on, using accident, confusion or ineptitude as an excuse.
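The FDIV bug mentioned above was famously demonstrated with a single division. A minimal Python sketch using the widely reported test values (the residue expression is the classic published check; on a correct FPU it comes out at or vanishingly near zero, whereas flawed Pentiums returned 256):

```python
# Classic Pentium FDIV test case: on a flawed chip, 4195835 / 3145727
# was computed incorrectly around the fifth significant digit.
x = 4195835.0
y = 3145727.0

# On a correct FPU this residue is essentially zero;
# flawed Pentiums famously returned 256 for it.
residue = x - (x / y) * y
print(residue)
```

Running this on any modern machine simply confirms the hardware divides correctly, which is rather the point: the error lived in the chip, not in the user.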
Set against that broad backdrop, do computers as such ever make mistakes? Here are some possible examples of true "computer errors":
- Physical phenomena such as noise on communications links and power supplies frequently cause errors. The vast majority are automatically controlled (e.g. detected using Cyclic Redundancy Checks and corrected by retransmission or error-correcting codes), but some slip through due to limitations in the controls. These could also be categorized as physical incidents and inherent limitations of information theory, while limited controls are, again, largely the result of human errors;
- Just like people, computers are subject to rounding errors, and the mathematical principles that underpin statistics apply equally to computers, calculators and people. Fully half of all computers make more than the median number of errors!;
- Artificial intelligence systems can be misled by available information. They are almost as vulnerable to learning inappropriate rules and drawing false conclusions as we humans are. It could be argued that these are not even mistakes, however, since there are complex but mechanistic relationships between their inputs and outputs;
- Computers are almost as vulnerable as us to errors in ill-defined areas such as language and subjectivity in general - but again it could be argued that these aren't even errors. Personally, I think people are wrong to use SMS/TXT shortcuts and homonyms in email, and by implication email systems are wrong in neither expanding nor correcting them for me. I no U may nt accpt tht.
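The Cyclic Redundancy Checks mentioned above can be illustrated in a few lines. A toy sketch using Python's standard CRC-32 (real links layer standard polynomials like this under retransmission protocols; the payload here is invented for illustration):

```python
import binascii

# Sender computes a CRC-32 checksum over the payload and transmits both.
payload = b"transfer 100.00 to account 12345"
checksum = binascii.crc32(payload)

# Simulate line noise: flip a single bit in transit.
corrupted = bytearray(payload)
corrupted[7] ^= 0x01

# Receiver recomputes the CRC; a mismatch reveals the corruption.
print(binascii.crc32(payload) == checksum)           # True: original verifies
print(binascii.crc32(bytes(corrupted)) == checksum)  # False: corruption detected
```

CRC-32 is guaranteed to catch any single-bit error, but not every possible multi-bit pattern, which is one way genuinely erroneous data can still slip through the controls.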
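The rounding errors mentioned above are easy to demonstrate: binary floating point cannot represent most decimal fractions exactly, so error accumulates in ordinary arithmetic. A minimal Python illustration:

```python
# 0.1 has no exact binary floating-point representation, so repeated
# addition accumulates rounding error just as a careless human might.
total = sum([0.1] * 10)
print(total == 1.0)  # False: the accumulated sum is not exactly 1.0
print(total)         # 0.9999999999999999

# The decimal module computes in base 10 and avoids this particular trap.
from decimal import Decimal
print(sum([Decimal("0.1")] * 10) == Decimal("1.0"))  # True
```

The computer is not making a mistake here; it is faithfully following the rules of binary arithmetic, and the surprise belongs to the human who expected decimal behaviour.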
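The point above about AI systems learning inappropriate rules can be sketched with a toy, entirely hypothetical word-based "learner": because every spam example in its biased training set happens to contain one word, it latches onto that word and draws a false conclusion about an innocent message:

```python
# Hypothetical, deliberately biased training data: every spam example
# happens to contain the word "free".
training = [
    ("win a free holiday now", "spam"),
    ("free pills free pills", "spam"),
    ("meeting moved to 3pm", "ham"),
    ("quarterly report attached", "ham"),
]

# "Learn" the simplest possible rule: any word seen only in spam
# examples is treated as evidence of spam.
spam_words = {w for text, label in training if label == "spam" for w in text.split()}
ham_words = {w for text, label in training if label == "ham" for w in text.split()}
rule_words = spam_words - ham_words

def classify(message):
    return "spam" if rule_words & set(message.split()) else "ham"

# The learned rule generalises badly: a legitimate notice is misclassified.
print(classify("your upgrade is free of charge"))  # "spam": a false conclusion
```

As the bullet above argues, this is arguably not an error at all: the relationship between the biased inputs and the wrong output is entirely mechanistic, and the mistake traces back to whoever supplied the training data.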