What is the real difference between acceptance tests and functional tests?
What are the highlights or aims of each? Everywhere I read, the two seem ambiguously similar.
Functional Testing: Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.
Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria—enables an end user to determine whether or not to accept the system.
Audience. Functional testing is to assure members of the team producing the software that it does what they expect. Acceptance testing is to assure the consumer that it meets their needs.
Scope. Functional testing only tests the functionality of one component at a time. Acceptance testing covers any aspect of the product that matters to the consumer enough to test before accepting the software (i.e., anything worth the time or money it will take to test it to determine its acceptability).
Software can pass functional testing, integration testing, and system testing, only to fail acceptance testing when the customer discovers that the features just don't meet their needs. This would usually imply that someone screwed up on the spec. Software could also fail some functional tests yet pass acceptance testing, because the customer is willing to deal with some functional bugs as long as the software does the core things they need acceptably well (beta software will often be accepted by a subset of users before it is completely functional).
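To make that concrete, here is a minimal, purely hypothetical sketch using Python and pytest (the export function, its spec, and the customer's real need are all invented for illustration): the functional test passes because the code matches the written spec, while the acceptance-style check fails because the spec itself missed what the customer actually wanted.

```python
def export_totals(orders):
    """Return a CSV string with one line per order: "id,total" (per the written spec)."""
    return "\n".join(f"{o['id']},{o['total']}" for o in orders)


def test_functional_export_matches_spec():
    # Verification: the code does exactly what the written spec says (one row per order).
    orders = [{"id": 1, "total": 10.0}, {"id": 2, "total": 5.5}]
    assert export_totals(orders) == "1,10.0\n2,5.5"


def test_acceptance_customer_needs_monthly_totals():
    # Validation: the customer actually wanted one row per month, so this
    # deliberately fails even though every functional test passes.
    orders = [{"id": 1, "total": 10.0, "month": "2024-01"}]
    assert export_totals(orders).startswith("2024-01,")
```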
Acceptance testing is simply testing carried out by (or on behalf of) the client, and it can include other kinds of testing.
For functional testing vs. non-functional testing (and their subtypes), see my answer to this SO question.
In my world, we use the terms as follows:
functional testing: This is a verification activity; did we build a correctly working product? Does the software meet the business requirements?
For this type of testing we have test cases that cover all the possible scenarios we can think of, even if a scenario is unlikely to exist "in the real world". When doing this type of testing, we aim for maximum code coverage. We use any test environment we can grab at the time; it doesn't have to be "production" caliber, as long as it's usable.
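As an illustration of that mindset, here is a small hypothetical pytest sketch (parse_quantity and its rules are assumptions, not something from the question): functional tests also exercise inputs we never expect to see in production, purely to verify that the code behaves as specified.

```python
import pytest


def parse_quantity(text):
    """Hypothetical unit under test: parse a non-negative integer quantity."""
    value = int(text)  # raises ValueError for non-numeric input
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value


# Cover the "normal" cases and the unlikely ones, aiming for full coverage.
@pytest.mark.parametrize("text,expected", [("0", 0), ("1", 1), ("999999", 999999)])
def test_parse_quantity_valid(text, expected):
    assert parse_quantity(text) == expected


@pytest.mark.parametrize("text", ["-1", "abc", "", "1.5"])
def test_parse_quantity_invalid(text):
    with pytest.raises(ValueError):
        parse_quantity(text)
```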
acceptance testing: This is a validation activity; did we build the right thing? Is this what the customer really needs?
This is usually done in cooperation with the customer, or by an internal customer proxy (product owner). For this type of testing we use test cases that cover the typical scenarios under which we expect the software to be used. This test must be conducted in a "production-like" environment, on hardware that is the same as, or close to, what a customer will use. This is when we test our "ilities":
Reliability, Availability: Validated via a stress test.
Scalability: Validated via a load test (a minimal sketch follows this list).
Usability: Validated via an inspection and demonstration to the customer. Is the UI configured to their liking? Did we put the customer branding in all the right places? Do we have all the fields/screens they asked for?
Security (aka, Securability, just to fit in): Validated via demonstration. Sometimes a customer will hire an outside firm to do a security audit and/or intrusion testing.
Maintainability: Validated via demonstration of how we will deliver software updates/patches.
Configurability: Validated via demonstration of how the customer can modify the system to suit their needs.
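As an example of the scalability check above, here is a minimal load-test sketch in plain Python; the endpoint, user count, and thresholds are assumptions for illustration, and a real project would more likely use a dedicated tool such as JMeter, Gatling, or Locust against the production-like environment.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET = "https://staging.example.com/health"  # hypothetical production-like endpoint


def one_request(_):
    start = time.perf_counter()
    with urlopen(TARGET, timeout=5) as resp:
        ok = resp.status == 200
    return ok, time.perf_counter() - start


def test_scalability_under_load():
    # Simulate 50 concurrent users, each issuing one request.
    with ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(one_request, range(50)))
    errors = sum(1 for ok, _ in results if not ok)
    slowest = max(latency for _, latency in results)
    # Illustrative acceptance thresholds agreed with the customer.
    assert errors == 0
    assert slowest < 2.0
```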
This is by no means standard, and I don't think there is a "standard" definition, as the conflicting answers here demonstrate. The most important thing for your organization is that you define these terms precisely, and stick to them.
I like Patrick Cuff's answer. What I'd like to add is the distinction between a test level and a test type, which was an eye opener for me.
A test level is easy to explain using the V-model: each test level has a corresponding development level (for example, component tests verify the detailed design, integration tests verify the architectural design, system tests verify the system requirements, and acceptance tests validate the user/business requirements). Test levels also have a typical time characteristic: they're executed at a certain phase of the development life cycle.
A test type is a characteristic: it focuses on a specific test objective. Test types emphasize quality aspects, also known as technical or non-functional aspects, and they can be executed at any test level. As test types I like to use the quality characteristics mentioned in ISO/IEC 25010:2011.
To make it complete: there's also something called regression testing. This is an extra classification next to test level and test type. A regression test is a test you want to repeat because it touches something critical in your product; it's in fact a subset of the tests you defined for each test level. If there's a small bug fix in your product, you don't always have time to repeat all the tests. Regression testing is the answer to that.
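One common way to manage that subset, sketched here with pytest (the marker name and the tests are hypothetical; you would also register the marker in pytest.ini to avoid warnings):

```python
import pytest


@pytest.mark.regression
def test_invoice_total_rounding():
    # Touches something critical that once broke, so it is repeated on every small fix.
    assert round(19.999, 2) == 20.0


def test_invoice_pdf_layout():
    # Not part of the regression subset; only run during a full test pass.
    ...
```

Running `pytest -m regression` then executes only the marked subset after a small bug fix, instead of the full suite.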
The answer is a matter of opinion. I've worked on a lot of projects, as test manager, issue manager, and in other roles, and the descriptions in various books differ, so here is my variation:
functional-testing: take the business requirements and test all of them well and thoroughly from a functional viewpoint.
acceptance-testing: the "paying" customer does whatever testing they like so that they can accept the product delivered. It depends on the customer, but usually the tests are not as thorough as the functional-testing, especially if it is an in-house project, because the stakeholders review and trust the test results from the earlier test phases.
As I said, this is my viewpoint and experience. Functional-testing is systematic, while acceptance-testing is more the business department trying out the product.