Recently, someone mentioned “face validity”, and I nodded along, but I had no idea what face validity meant as opposed to plain old validity. I was happy enough knowing the difference between validity and reliability, which I had probably read somewhere during my coursework days. It turns out that there are seven types of validity, and possibly more that I have forgotten.
Face validity examines whether an instrument appears to measure what it claims to measure, and whether participants understand it that way. It is not always necessary to demonstrate face validity, but it can be useful to know how people react to a survey or questionnaire when repeating or modifying the research.
Content validity comes from showing that the instrument is in line with the theoretical basis of the research. If you draw on more than one theory, content validity can be easier to demonstrate, but it makes for a harder discussion when tying your results back to theory, because you have to justify which theory or theories to discuss.
Construct validity gives you a picture of whether what the instrument actually measured matches what it was designed to measure.
Factorial validity shows that clearly measurable (empirical rather than purely theoretical) question items can be reproduced across groups: if the same items hold together statistically in different samples, you have factorial validity.
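As a rough sketch of what "reproduced across groups" can mean in practice (this is my own illustration with simulated numpy data, not from any particular study): split the sample in two and check that both halves yield a similar loading pattern on the first principal component of the item correlations.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated survey: 200 respondents, 4 items driven by one latent trait
# (all numbers here are invented for illustration)
latent = rng.normal(size=(200, 1))
items = latent @ np.ones((1, 4)) + rng.normal(scale=0.5, size=(200, 4))

def loadings(x):
    """Loadings of items on the first principal component of the item correlations."""
    corr = np.corrcoef(x, rowvar=False)
    vals, vecs = np.linalg.eigh(corr)  # eigenvalues in ascending order
    v = vecs[:, -1]                    # eigenvector of the largest eigenvalue
    return v * np.sign(v.sum())        # fix the arbitrary sign so halves are comparable

half_a, half_b = items[:100], items[100:]
# If the factor structure replicates, the two unit-length loading
# vectors point the same way and their dot product is close to 1.
similarity = float(np.dot(loadings(half_a), loadings(half_b)))
print(similarity > 0.9)
```

A real analysis would use a proper factor-analysis routine and more than one factor, but the split-and-compare idea is the same.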
Convergent validity shows that scores on the instrument are consistent, in either a positive or negative direction, with scores from other instruments grounded in similar theory.
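Numerically, this usually comes down to correlating scores from the new instrument with scores from an established one. A minimal sketch with simulated data (the scale names and numbers are invented, not from the post):

```python
import numpy as np

rng = np.random.default_rng(1)
# One underlying trait measured by two instruments (an assumption for the demo)
trait = rng.normal(size=300)
new_scale = trait + rng.normal(scale=0.6, size=300)    # the instrument under study
established = trait + rng.normal(scale=0.6, size=300)  # a previously validated instrument

# A strong correlation in the expected direction supports convergent validity
r = np.corrcoef(new_scale, established)[0, 1]
print(r > 0.5)
```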
Discriminant validity demonstrates that the instrument's factors are not confounded with one another: items written to measure theoretically distinct constructs should not correlate so highly that the constructs cannot be told apart.
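The mirror image of the convergent check, again sketched with simulated data (constructs and cutoffs are illustrative assumptions): scores for two theoretically distinct constructs should correlate only weakly.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two theoretically distinct traits, simulated as independent (an assumption)
trait_a = rng.normal(size=300)
trait_b = rng.normal(size=300)
# Each scale tracks its own trait plus measurement noise
scale_a = trait_a + rng.normal(scale=0.5, size=300)
scale_b = trait_b + rng.normal(scale=0.5, size=300)

# A weak cross-construct correlation supports discriminant validity
abs_r = abs(np.corrcoef(scale_a, scale_b)[0, 1])
print(abs_r < 0.3)
```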
Known-groups validity verifies that scores differ between two clearly distinct collections of survey respondents, showing that the instrument can report the characteristics of different groups.
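In practice this is often a simple two-sample comparison. The sketch below uses simulated data and a hand-computed Welch t statistic; the group names and every number are illustrative assumptions, and a real analysis would report a proper p-value.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical groups expected to differ on the measured trait
# (e.g. experienced vs. first-time users; names are invented)
experienced = rng.normal(loc=4.0, scale=0.8, size=60)
first_time = rng.normal(loc=3.0, scale=0.8, size=60)

# Welch t statistic: the difference in group means, standardized
# by the standard error of that difference
se = np.sqrt(experienced.var(ddof=1) / 60 + first_time.var(ddof=1) / 60)
t = (experienced.mean() - first_time.mean()) / se
print(abs(t) > 2.0)  # a clear separation between the groups supports this check
```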
As you can see, with all these measures of validity to consider, it is often a good idea to use an existing survey instrument, so you know that all of this has already been considered and completed by someone else!