
2023 Cruisers Yachts 38' A License To Chill

A License To Chill is a 2023 Cruisers Yachts 38' 38 GLS OB listed for sale with United Yacht Broker John Blumenthal. John can be reached at 1-772-215-2571 to answer any questions you may have on this boat. United Yacht Sales is a professional yacht brokerage firm that has experience listing and selling all types of Cruisers Yachts and similar boats. With over 250 yacht brokers worldwide, we have the largest network of boat buyers and sellers in the industry.

The 38 GLS's innovative design features everything you love about the Cantius series, and with triple Mercury 4.6 L Verado V8 300 hp outboards it is fast, smooth, quiet, and dependable. You can increase your swimming area by lowering the starboard-side balcony of the 38 GLS, converting it into a swim platform at the push of a button. The lower cabin features an aft stateroom and a U-shaped dinette that converts into a berth. A full galley and standing head with shower complete this extraordinary yacht. The large bow-rider area has a varnished table that drops down to create a sun lounger. SeaDek matting covers the entire deck for cushioned comfort and a non-skid surface that looks great. Storage covers for all seats keep the 38 GLS in great shape when stored, and the bow cover can be used underway to protect the cockpit from wind. The fiberglass hardtop features a large sunroof, lighting, and speakers.

Specifications

  • Price: $695,000 USD

Cruisers Yachts

Power yacht located in Jupiter, Florida, United States.

  • LOA: 38 ft
  • Display Length: 38 ft
  • Beam: 12' 6"
  • Water Capacity: 50 gals
  • Fuel Capacity: 335 gals
  • Engine Details: Mercury Verado 300 outboards
  • Engine 1: 2023, 100 hours, 300 HP
  • Engine 2: 2023, 100 hours, 300 HP
  • Engine 3: 2023, 100 hours, 300 HP
  • Engine Fuel: Gas/Petrol
  • Days on Market: INQUIRE

+ Manufacturer Provided Description

The 38 GLS’s innovative design features everything you love about the Cantius series but with triple Mercury Verados. Expand your swimming area by lowering the beach door to convert it into a swim platform. The lower cabin features an aft stateroom and U-Shaped dinette that converts into a berth. A full galley and standing head with shower complete this extraordinary yacht.

+ Vessel Walkthrough

  • Grey SeaDek Flooring Throughout
  • Port Side L-Shape Seating with Table
  • Stbd. Side Seating with Table
  • Electric Grill
  • Serving Countertop with Sink
  • 2 Hard Mounted Stools
  • Refrigerator
  • Ice Maker under Helm Seat
  • Canvas Retractable Sunroof
  • Walkthrough to Aft Bench Seating
  • Transom Shower
  • Concealed Swim Ladder
  • Black Covers for All Seating, Tables, Grill and Helm Area
  • Stereo Speakers
  • Beach Door is a Highlight!
  • Cockpit Retractable Sure Shade
  • Overhead Lighting in Hardtop
  • Walkthrough to Bow Seating Arrangement
  • Black Cover for Seating Protection
  • Bolster Helm Seat
  • Seat Folds Out (Hi/Lo)
  • Custom Tilt Steering Wheel
  • 2 x Simrad Nav Screens (Sounder/Plotter/Radar)
  • Seakeeper Control
  • Mercury Joy Stick Control
  • Mercury Side by Side Control Levers
  • Smartphone Charging Station
  • Ritchie Compass
  • Stereo Head Control
  • Operating Panel with Press Button Controls
  • Cup Holders

Bow Arrangement:

  • U-Shape Seating
  • High Gloss Table with Cup Holders
  • SeaDek Flooring
  • Numerous Cup Holders
  • Remote Stereo Head
  • Under Seating Storage
  • Table Drops to make-up Sun Lounge
  • Full Protection Cover
  • Anchor Locker
  • Fresh Water Spigot for Washdown
  • Electric Windlass with Chain and Rode
  • Up/Down Depress Pads
  • Navigation Lights

Engine & Mechanical & Lazarette:

  • Triple 300 Verados
  • Aft Underwater Lights
  • Electric Lazarette Access Hatch
  • 6 KW Kohler Generator with Hardcover Sound Shield (Diesel)
  • Racor Filter
  • Gas Filters
  • Hydraulics for Beach Door
  • Main 12V Distribution Panel
  • 2 x Pro-Mariner 12V Charger
  • House and Engine Start Batteries
  • Seakeeper 3
  • 3 x Power Steering Pumps for each Verado
  • Fireboy Fire Suppression

Cabin:

  • Sliding Access Door with Screen
  • 3 x Steps Down into Cabin
  • Ebony Oak Finish Throughout Vessel
  • Port/Stbd. Seating
  • High Gloss Dinette Table
  • Overhead LED Lighting
  • Table Drops to make up Large Bed
  • Stereo Head with Speakers
  • Port/Stbd. Shelves
  • Under Seat Storage
  • Vitrifrigo Fridge
  • Flat Screen TV
  • Full Beam Aft Berth
  • Air Conditioning
  • Shelf and Cabinet Storage
  • Above Counter Designer Sink
  • Vanity with Mirror Doors
  • Ample Storage below Sink
  • 110V Outlet
  • Overhead Lighting
  • Ventilation Fan
  • Shower with Curtain
  • Electric Head

+ Mechanical Disclaimer

Engine and generator hours are as of the date of the original listing and are a representation of what the listing broker is told by the owner and/or actual reading of the engine hour meters. The broker cannot guarantee the true hours. It is the responsibility of the purchaser and/or his agent to verify engine hours, warranties implied or otherwise and major overhauls as well as all other representations noted on the listing brochure.

+ Disclaimer

The company offers the details of this vessel in good faith but cannot guarantee or warrant the accuracy of this information nor warrant the condition of the vessel. A buyer should instruct his agents, or his surveyors, to investigate such details as the buyer desires validated. This vessel is offered subject to prior sale, price change or withdrawal without notice.

+ Brokers Comments

The A LICENSE TO CHILL has recently come to market. The seller has decided to downsize. This vessel, being a 2023 model, still has current warranties.

This is a perfect opportunity to step into a warrantied vessel. 

Do not hesitate to call me for more details and schedule a showing.

Listing MLS by Yachtr.com

Interested In This Yacht?

Contact John Blumenthal to learn more!

ABOUT THIS YACHT FOR SALE

Our Cruisers Yachts listing is a great opportunity to purchase a 38' Bowrider for sale in Jupiter, Florida - United States. This Cruisers Yachts is currently listed for $695,000. For more information on this vessel or to schedule a showing, please contact United Yacht Sales broker John Blumenthal at 1-772-215-2571.

PROFESSIONAL YACHT BROKERAGE SERVICES

United is a professional yacht brokerage firm with over 200 yacht brokers in more than 104 locations worldwide. When you list your boat or yacht for sale with us, the entire team is immediately notified and begins working to match your yacht with a buyer. We have many examples of boats selling through our network within days of being introduced to our team. With more than $1.3 billion in sales, there is no better firm than United to help with the listing and sale of your vessel. Find out what your current yacht is worth on today's market!

BUYING A YACHT WITH THE UNITED TEAM

The yacht MLS consists of thousands of available brokerage vessels from all over the world, in a range of conditions. Hiring an experienced yacht broker to help you find the perfect boat makes financial sense and takes the stress out of the process. A United broker starts by listening to your needs: how you plan to use your boat, your potential boating locations, and your budget. We then go to work looking at all of the available yachts that fit your criteria, research their history, provide you with a clear picture of the market, and organize the showings. We're with you every step of the way from survey to acceptance, and our industry-leading support staff will make sure your closing goes smoothly.

RELATED YACHTS


TIME TRAVEL

60' Cruisers Yachts Cantius Sports Coupe 2017

Kirkland, Washington, United States


Miss Tracy Lane

60' Cruisers Yachts 60 Cantius Fly 2023

Panama City Beach, Florida, United States


56' Cruisers Yachts 560 Express 2011

Miami, Florida, United States


Cruisers Yachts 38 GLS Outboard

The 38 GLS OB for sale at your local dealer combines the unmatched performance and entertainment capabilities of the 38 GLS with powerful, easy-to-maintain outboards. Expand your swimming area by lowering the side of the 38 GLS to convert it into a swim platform. The lower cabin features an aft stateroom and U-shaped dinette that converts into a berth. A full galley and standing head with shower complete this extraordinary yacht. It’s a match made in heaven for lovers of on-the-water fun.


Expand your swimming area by lowering the side of the 38 GLS to convert it into a swim platform. The easy-to-access controls and safety mechanisms allow for endless family fun. The aft-facing bench backrest can swivel to face either the cockpit or the beach door.


The 38 GLS OB is powered by triple 300-450 Mercury Verados. The joystick piloting allows you to navigate with ease at a top speed of 65 mph with the triple 450 racing engines.


The open-concept cockpit was designed with entertainment in mind. You can find endless seating options between the bow lounge, two mid-ship L-shaped dinettes, and an aft-facing bench. For alternative seating options, the aft-facing bench backrest can swivel to face the beach door.


Luxury Finishes

Cruisers Yachts incorporates numerous intricate details for superior finishes. 316L-grade stainless steel metal components enhance durability throughout the vessel, reinforcing key elements such as deck cleats, rail stanchions, arch legs, and sump and bilge pump foundations. Cutting-edge machines and hand-sewn techniques create high-quality upholstery. Top-quality materials are used for a classic and durable interior look. Exotic woods are meticulously selected, machine-sanded, and finished to withstand marine environments.


Cockpit Galley

The galley features a fiberglass inlay sink, fridge, bottle storage along with optional grill and TV. Continue the conversation while sitting on swivel bar stools at the raised wet bar.


Bow Seating


Lower Salon


Aft Stateroom

Specifications

Cruisers Yachts 38 GLS OB main deck plan

  • Length Overall: 38' / 11.58 m
  • Length Overall (with engines): 40'3" / 12.27 m
  • Beam: 12'6" / 3.8 m
  • Maximum Beam: 12'8" / 3.86 m
  • Draft (engines down): 38.5" / 0.98 m
  • Draft (engines up): 24.5" / 0.62 m

Cruisers Yachts 38 GLS OB lower deck layout

  • Fuel Capacity: 335 gallons / 1,268 L
  • Water Capacity: 50 gallons / 189.3 L
  • Holding Tank: 31 gallons / 117.3 L
  • Dry Weight: 23,916 lbs / 10,848 kg



Reliability vs. Validity in Research | Difference, Types and Examples

Published on July 3, 2019 by Fiona Middleton. Revised on June 22, 2023.

Reliability and validity are concepts used to evaluate the quality of research. They indicate how well a method, technique, or test measures something. Reliability is about the consistency of a measure, and validity is about the accuracy of a measure.

It’s important to consider reliability and validity when you are creating your research design , planning your methods, and writing up your results, especially in quantitative research . Failing to do so can lead to several types of research bias and seriously affect your work.

Reliability vs validity

What does it tell you?
  • Reliability: the extent to which the results can be reproduced when the research is repeated under the same conditions.
  • Validity: the extent to which the results really measure what they are supposed to measure.

How is it assessed?
  • Reliability: by checking the consistency of results across time, across different observers, and across parts of the test itself.
  • Validity: by checking how well the results correspond to established theories and other measures of the same concept.

How do they relate?
  • A reliable measurement is not always valid: the results might be reproducible, but they're not necessarily correct.
  • A valid measurement is generally reliable: if a test produces accurate results, they should be reproducible.

Table of contents

  • Understanding reliability vs validity
  • How are reliability and validity assessed?
  • How to ensure validity and reliability in your research
  • Where to write about reliability and validity in a thesis
  • Other interesting articles

Reliability and validity are closely related, but they mean different things. A measurement can be reliable without being valid. However, if a measurement is valid, it is usually also reliable.

What is reliability?

Reliability refers to how consistently a method measures something. If the same result can be consistently achieved by using the same methods under the same circumstances, the measurement is considered reliable.

What is validity?

Validity refers to how accurately a method measures what it is intended to measure. If research has high validity, that means it produces results that correspond to real properties, characteristics, and variations in the physical or social world.

High reliability is one indicator that a measurement is valid. If a method is not reliable, it probably isn’t valid.

Suppose you measure the temperature of a liquid sample several times under carefully controlled conditions that keep the sample's temperature the same. If the thermometer shows a different temperature each time, it is probably malfunctioning, and therefore its measurements are not valid.

However, reliability on its own is not enough to ensure validity. Even if a test is reliable, it may not accurately reflect the real situation.

Validity is harder to assess than reliability, but it is even more important. To obtain useful results, the methods you use to collect data must be valid: the research must be measuring what it claims to measure. This ensures that your discussion of the data and the conclusions you draw are also valid.
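As a rough illustration of the distinction, the thermometer scenario above can be simulated in a few lines of Python. The readings are invented for illustration: a consistently offset instrument is reliable but not valid.

```python
# Illustrative sketch (invented numbers): a miscalibrated thermometer
# gives consistent readings, so it is reliable, but every reading is
# offset from the true temperature, so it is not valid.
from statistics import mean, stdev

true_temp = 37.0
readings = [39.1, 39.0, 39.1, 38.9, 39.0]  # tight spread, but all too high

spread = stdev(readings)            # small -> consistent -> reliable
bias = mean(readings) - true_temp   # large -> inaccurate -> not valid

print(f"spread = {spread:.2f} degrees, bias = {bias:+.2f} degrees")
```

A thermometer that is both reliable and valid would show a small spread and a bias near zero.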


Reliability can be estimated by comparing different versions of the same measurement. Validity is harder to assess, but it can be estimated by comparing the results to other relevant data or theory. Methods of estimating reliability and validity are usually split up into different types.

Types of reliability

Different types of reliability can be estimated through various statistical methods.

  • Test-retest reliability: the consistency of a measure across time. Do you get the same results when you repeat the measurement? Example: a group of participants complete a questionnaire designed to measure personality traits. If they repeat the questionnaire days, weeks or months apart and give the same answers, this indicates high test-retest reliability.
  • Inter-rater reliability: the consistency of a measure across raters or observers. Do you get the same results when different people conduct the same measurement? Example: based on an assessment criteria checklist, five examiners submit substantially different results for the same student project. This indicates that the assessment checklist has low inter-rater reliability (for example, because the criteria are too subjective).
  • Internal consistency: the consistency of the measurement itself. Do you get the same results from different parts of a test that are designed to measure the same thing? Example: you design a questionnaire to measure self-esteem. If you randomly split the results into two halves, there should be a strong correlation between the two sets of results. If the two results are very different, this indicates low internal consistency.

Types of validity

The validity of a measurement can be estimated based on three main types of evidence. Each type can be evaluated through expert judgement or statistical methods.

  • Construct validity: the adherence of a measure to existing theory and knowledge of the concept being measured. Example: a self-esteem questionnaire could be assessed by measuring other traits known or assumed to be related to the concept of self-esteem (such as social skills and optimism). Strong correlation between the scores for self-esteem and associated traits would indicate high construct validity.
  • Content validity: the extent to which the measurement covers all aspects of the concept being measured. Example: a test that aims to measure a class of students' level of Spanish contains reading, writing and speaking components, but no listening component. Experts agree that listening comprehension is an essential aspect of language ability, so the test lacks content validity for measuring the overall level of ability in Spanish.
  • Criterion validity: the extent to which the result of a measure corresponds to other valid measures of the same concept. Example: a survey is conducted to measure the political opinions of voters in a region. If the results accurately predict the later outcome of an election in that region, this indicates that the survey has high criterion validity.

To assess the validity of a cause-and-effect relationship, you also need to consider internal validity (the design of the experiment ) and external validity (the generalizability of the results).

The reliability and validity of your results depend on creating a strong research design, choosing appropriate methods and samples, and conducting the research carefully and consistently.

Ensuring validity

If you use scores or ratings to measure variations in something (such as psychological traits, levels of ability or physical properties), it’s important that your results reflect the real variations as accurately as possible. Validity should be considered in the very earliest stages of your research, when you decide how you will collect your data.

  • Choose appropriate methods of measurement

Ensure that your method and measurement technique are high quality and targeted to measure exactly what you want to know. They should be thoroughly researched and based on existing knowledge.

For example, to collect data on a personality trait, you could use a standardized questionnaire that is considered reliable and valid. If you develop your own questionnaire, it should be based on established theory or findings of previous studies, and the questions should be carefully and precisely worded.

  • Use appropriate sampling methods to select your subjects

To produce valid and generalizable results, clearly define the population you are researching (e.g., people from a specific age range, geographical location, or profession).  Ensure that you have enough participants and that they are representative of the population. Failing to do so can lead to sampling bias and selection bias .

Ensuring reliability

Reliability should be considered throughout the data collection process. When you use a tool or technique to collect data, it’s important that the results are precise, stable, and reproducible .

  • Apply your methods consistently

Plan your method carefully to make sure you carry out the same steps in the same way for each measurement. This is especially important if multiple researchers are involved.

For example, if you are conducting interviews or observations , clearly define how specific behaviors or responses will be counted, and make sure questions are phrased the same way each time. Failing to do so can lead to errors such as omitted variable bias or information bias .

  • Standardize the conditions of your research

When you collect your data, keep the circumstances as consistent as possible to reduce the influence of external factors that might create variation in the results.

For example, in an experimental setup, make sure all participants are given the same information and tested under the same conditions, preferably in a properly randomized setting. Failing to do so can lead to a placebo effect , Hawthorne effect , or other demand characteristics . If participants can guess the aims or objectives of a study, they may attempt to act in more socially desirable ways.

It’s appropriate to discuss reliability and validity in various sections of your thesis, dissertation, or research paper. Showing that you have taken them into account in planning your research and interpreting the results makes your work more credible and trustworthy.

Reliability and validity in a thesis

  • Literature review: what have other researchers done to devise and improve methods that are reliable and valid?
  • Methodology: how did you plan your research to ensure the reliability and validity of the measures used? This includes the chosen sample set and size, sample preparation, external conditions and measuring techniques.
  • Results: if you calculate reliability and validity, state these values alongside your main results.
  • Discussion: this is the moment to talk about how reliable and valid your results actually were. Were they consistent, and did they reflect true values? If not, why not?
  • Conclusion: if reliability and validity were a big problem for your findings, it might be helpful to mention this here.


If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Middleton, F. (2023, June 22). Reliability vs. Validity in Research | Difference, Types and Examples. Scribbr. Retrieved July 16, 2024, from https://www.scribbr.com/methodology/reliability-vs-validity/


J Family Med Prim Care. v.4(3); Jul-Sep 2015

Validity, reliability, and generalizability in qualitative research

Lawrence Leung

1 Department of Family Medicine, Queen's University, Kingston, Ontario, Canada

2 Centre of Studies in Primary Care, Queen's University, Kingston, Ontario, Canada

In general practice, qualitative research contributes as significantly as quantitative research, in particular regarding psycho-social aspects of patient care, health services provision, policy setting, and health administration. In contrast to quantitative research, qualitative research as a whole has been constantly critiqued, if not disparaged, for the lack of consensus on assessing its quality and robustness. This article illustrates, with five published studies, how qualitative research can impact and reshape the discipline of primary care, spiraling out from clinic-based health screening to community-based disease monitoring, evaluation of out-of-hours triage services, a provincial psychiatric care pathways model and, finally, national legislation of core measures for children's healthcare insurance. Fundamental concepts of validity, reliability, and generalizability as applicable to qualitative research are then addressed, with an update on current views and controversies.

Nature of Qualitative Research versus Quantitative Research

The essence of qualitative research is to make sense of and recognize patterns among words in order to build up a meaningful picture without compromising its richness and dimensionality. Like quantitative research, qualitative research aims to seek answers to questions of “how, where, when, who and why” with a perspective to build or refute a theory. Unlike quantitative research, which deals primarily with numerical data and their statistical interpretation under a reductionist, logical and strictly objective paradigm, qualitative research handles nonnumerical information and its phenomenological interpretation, which inextricably ties in with human senses and subjectivity. While human emotions and perspectives from both subjects and researchers are considered undesirable biases confounding results in quantitative research, the same elements are considered essential and inevitable, if not treasurable, in qualitative research, as they invariably add extra dimensions and color to enrich the corpus of findings. However, the issue of subjectivity and contextual ramifications has fueled incessant controversies regarding yardsticks for the quality and trustworthiness of qualitative research results for healthcare.

Impact of Qualitative Research upon Primary Care

In many ways, qualitative research contributes significantly, if not more so than quantitative research, to the field of primary care at various levels. Five qualitative studies are chosen to illustrate how various methodologies of qualitative research helped in advancing primary healthcare, from novel monitoring of chronic obstructive pulmonary disease (COPD) via mobile-health technology,[ 1 ] informed decision for colorectal cancer screening,[ 2 ] triaging out-of-hours GP services,[ 3 ] evaluating care pathways for community psychiatry[ 4 ] and finally prioritization of healthcare initiatives for legislation purposes at national levels.[ 5 ] With the recent advances of information technology and mobile connecting device, self-monitoring and management of chronic diseases via tele-health technology may seem beneficial to both the patient and healthcare provider. Recruiting COPD patients who were given tele-health devices that monitored lung functions, Williams et al. [ 1 ] conducted phone interviews and analyzed their transcripts via a grounded theory approach, identified themes which enabled them to conclude that such mobile-health setup and application helped to engage patients with better adherence to treatment and overall improvement in mood. Such positive findings were in contrast to previous studies, which opined that elderly patients were often challenged by operating computer tablets,[ 6 ] or, conversing with the tele-health software.[ 7 ] To explore the content of recommendations for colorectal cancer screening given out by family physicians, Wackerbarth, et al. [ 2 ] conducted semi-structure interviews with subsequent content analysis and found that most physicians delivered information to enrich patient knowledge with little regard to patients’ true understanding, ideas, and preferences in the matter. These findings suggested room for improvement for family physicians to better engage their patients in recommending preventative care. 
Faced with various models of out-of-hours triage for GP consultations, Egbunike et al.[3] conducted thematic analysis of semi-structured telephone interviews with patients and doctors in urban, rural, and mixed settings. They found that the efficiency of triage services remained a prime concern for both users and providers, alongside issues of access to doctors and unfulfilled or mismatched user expectations, which could provoke dissatisfaction and carry legal implications. In the UK, a care-pathways model for community psychiatry had been introduced, but its benefits were unclear. Khandaker et al.[4] therefore conducted a qualitative study using semi-structured interviews with medical staff and other stakeholders; adopting a grounded-theory approach, they identified major themes including improved equality of access, more focused logistics, increased work throughput, and better accountability for community psychiatry provided under the care-pathway model. Finally, at the US national level, Mangione-Smith et al.[5] employed a modified Delphi method to gather consensus from a panel of nominators, recognized experts and stakeholders in their disciplines, and identified a core set of quality measures for children's healthcare under the Medicaid and Children's Health Insurance Programs. These core measures were opened to public opinion and later passed into full legislation, illustrating the impact of qualitative research on social welfare and policy improvement.

Overall Criteria for Quality in Qualitative Research

Given the diverse genres and forms of qualitative research, there is no consensus on how to assess a piece of qualitative research work. Various approaches have been suggested, the two leading schools of thought being that of Dixon-Woods et al.,[8] which emphasizes methodology, and that of Lincoln et al.,[9] which stresses rigor in the interpretation of results. By identifying commonalities across qualitative research, Dixon-Woods produced a checklist of questions for assessing the clarity and appropriateness of the research question; the description and appropriateness of the sampling, data collection, and data analysis; the levels of support and evidence for claims; the coherence between data, interpretation, and conclusions; and, finally, the level of contribution of the paper. These criteria underpin the 10 questions of the Critical Appraisal Skills Programme checklist for qualitative studies.[10] However, such methodology-weighted criteria may not do justice to qualitative studies that differ in their epistemological and philosophical paradigms;[11,12] one classic example is positivist versus interpretivist studies.[13] Equally, without a robust methodological layout, the rigorous interpretation of results advocated by Lincoln et al.[9] will not suffice either. Meyrick[14] argued from a different angle and proposed the dual core criteria of "transparency" and "systematicity" for good-quality qualitative research: in brief, every step of the research logistics (from theory formation, study design, sampling, and data acquisition and analysis through to results and conclusions) must be shown to be transparent and systematic. In this manner, both the research process and the results can be assured of rigor and robustness.[14] Finally, Kitto et al.[15] distilled six criteria for assessing the overall quality of qualitative research: (i) clarification and justification, (ii) procedural rigor, (iii) sample representativeness, (iv) interpretative rigor, (v) reflexive and evaluative rigor, and (vi) transferability/generalizability, which also double as evaluative landmarks for manuscript review at the Medical Journal of Australia. As with quantitative research, the quality of qualitative research can be assessed in terms of validity, reliability, and generalizability.

Validity in qualitative research means the "appropriateness" of the tools, processes, and data: whether the research question is valid for the desired outcome, the choice of methodology is appropriate for answering the research question, the design is valid for the methodology, the sampling and data analysis are appropriate, and, finally, the results and conclusions are valid for the sample and context. In assessing the validity of qualitative research, the challenge can start from the ontology and epistemology of the issue being studied. For example, the concept of the "individual" is seen differently by humanistic and positive psychologists owing to their differing philosophical perspectives:[16] humanistic psychologists believe the "individual" is a product of existential awareness and social interaction, whereas positive psychologists hold that the "individual" exists from the formation of any human being. Setting off along these different pathways, qualitative research on an individual's wellbeing will reach conclusions of varying validity. The choice of methodology must enable detection of the findings or phenomena in the appropriate context for it to be valid, with due regard to cultural and contextual variability. For sampling, procedures and methods must be appropriate for the research paradigm, distinguishing between systematic,[17] purposeful,[18] and theoretical (adaptive) sampling:[19,20] systematic sampling has no a priori theory, purposeful sampling often has a certain aim or framework, and theoretical sampling is molded by the ongoing process of data collection and the evolving theory. For data extraction and analysis, several methods have been adopted to enhance validity, including first-tier triangulation (of researchers) and second-tier triangulation (of resources and theories),[17,21] a well-documented audit trail of materials and processes,[22,23,24] multidimensional analysis, whether concept- or case-oriented,[25,26] and respondent verification.[21,27]

Reliability

In quantitative research, reliability refers to the exact replicability of processes and results. In qualitative research, with its diverse paradigms, such a definition of reliability is challenging and epistemologically counter-intuitive; hence, the essence of reliability for qualitative research lies in consistency.[24,28] A margin of variability in results is tolerated in qualitative research provided that the methodology and epistemological logistics consistently yield data that are ontologically similar but may differ in richness and ambience within similar dimensions. Silverman[29] proposed five approaches to enhancing the reliability of process and results: refutational analysis, constant data comparison, comprehensive data use, inclusion of the deviant case, and use of tables. As data are extracted from the original sources, researchers must verify their accuracy in terms of form and context through constant comparison,[27] either alone or with peers (a form of triangulation).[30] The scope and analysis of the data included should be as comprehensive and inclusive as possible, with reference to quantitative aspects where applicable.[30] Adopting the Popperian dictum of falsifiability as the essence of truth and science, attempts to refute the qualitative data and analyses should be made in order to assess reliability.[31]
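Constant comparison with peers is often supplemented by a simple agreement statistic. As an illustrative sketch (not one of the cited methods, and with invented codes), two researchers' labels for the same set of interview excerpts can be compared with Cohen's kappa, which measures agreement beyond chance; the `cohens_kappa` helper below is written out for clarity:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Agreement between two coders, corrected for chance agreement."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # Observed proportion of excerpts where the coders agree.
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement if each coder assigned labels at their own base rates.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two researchers independently code ten interview excerpts (hypothetical data).
coder1 = ["coping", "stigma", "coping", "access", "stigma",
          "coping", "access", "stigma", "coping", "access"]
coder2 = ["coping", "stigma", "coping", "stigma", "stigma",
          "coping", "access", "stigma", "access", "access"]
print(round(cohens_kappa(coder1, coder2), 2))  # about 0.7: substantial agreement
```

A low kappa would prompt the coders to revisit the coding frame together, which is exactly the consistency check the constant-comparison strategy describes.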

Generalizability

Most, if not all, qualitative research studies are meant to examine a specific issue or phenomenon in a certain population or ethnic group, in a focused locality and a particular context; hence, generalizability of qualitative research findings is usually not an expected attribute. However, with the rising trend of knowledge synthesis from qualitative research via meta-synthesis, meta-narrative, or meta-ethnography, evaluating generalizability becomes pertinent. A pragmatic approach to assessing the generalizability of qualitative studies is to adopt the same criteria as for validity: that is, the use of systematic sampling, triangulation and constant comparison, proper audit and documentation, and multi-dimensional theory.[17] However, some researchers espouse the approach of analytical generalization,[32] in which one judges the extent to which the findings of one study can be generalized to another under a similar theoretical frame, and the proximal similarity model, in which the generalizability of one study to another is judged by the similarities of time, place, people, and other social contexts.[33] That said, Zimmer[34] questioned the suitability of meta-synthesis in view of the basic tenets of grounded theory,[35] phenomenology,[36] and ethnography.[37] He concluded that any valid meta-synthesis must retain the other two goals of theory development and higher-level abstraction while in search of generalizability, and must be executed as a third-level interpretation using Gadamer's concepts of the hermeneutic circle,[38,39] dialogic process,[38] and fusion of horizons.[39] Finally, Toye et al.[40] reported the practicality of using "conceptual clarity" and "interpretative rigor" as intuitive criteria for assessing quality in meta-ethnography, which somewhat echoes Rolfe's controversial aesthetic theory of research reports.[41]

Food for Thought

Despite the various measures for enhancing or ensuring the quality of qualitative studies, some researchers have argued, from a purist ontological and epistemological angle, that qualitative research is not a unified but an ipso facto diverse field,[8] and hence that any attempt to synthesize or appraise different studies under one system is impossible and conceptually wrong. Barbour argued from a philosophical angle that these special measures, or "technical fixes" (such as purposive sampling, multiple coding, triangulation, and respondent validation), can never confer the rigor they are conceived to provide.[11] In extremis, Rolfe et al., writing from the field of nursing research, held that any set of formal criteria used to judge the quality of qualitative research is futile and without validity, and suggested that a qualitative report should be judged by the form in which it is written (aesthetic) and not by its contents (epistemic).[41] Rolfe's novel view was rebutted by Porter,[42] who argued via logical premises that two of Rolfe's fundamental statements were flawed: (i) "the content of research reports is determined by their forms" may not be a fact, and (ii) if research appraisal is "subject to individual judgment based on insight and experience," then those without sufficient experience of performing research would be unable to judge adequately, which amounts to an elitist principle. From a realist standpoint, Porter then proposed multiple and open approaches to validity in qualitative research that incorporate parallel perspectives[43,44] and the diversification of meanings.[44] Any work of qualitative research, when read, is always a two-way interactive process, such that validity and quality have to be judged by the receiving end too, and not by the researcher end alone.

In summary, the three gold criteria of validity, reliability, and generalizability apply in principle to assessing quality in both quantitative and qualitative research; what differs is the nature and type of the processes that ontologically and epistemologically distinguish the two.

Source of Support: Nil.

Conflict of Interest: None declared.


Reliability vs Validity in Research | Differences, Types & Examples

Published on 3 May 2022 by Fiona Middleton . Revised on 10 October 2022.

Reliability and validity are concepts used to evaluate the quality of research. They indicate how well a method , technique, or test measures something. Reliability is about the consistency of a measure, and validity is about the accuracy of a measure.

It’s important to consider reliability and validity when you are creating your research design , planning your methods, and writing up your results, especially in quantitative research .

Reliability vs validity

  • What does it tell you? Reliability: the extent to which the results can be reproduced when the research is repeated under the same conditions. Validity: the extent to which the results really measure what they are supposed to measure.
  • How is it assessed? Reliability: by checking the consistency of results across time, across different observers, and across parts of the test itself. Validity: by checking how well the results correspond to established theories and other measures of the same concept.
  • How do they relate? A reliable measurement is not always valid: the results might be reproducible, but they're not necessarily correct. A valid measurement is generally reliable: if a test produces accurate results, they should be reproducible.
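The "reliable but not valid" case can be illustrated with a quick simulation: a hypothetical bathroom scale whose readings barely vary (reliable) but sit a constant 5 kg above the true weight (not valid). All numbers here are invented for illustration:

```python
import random
import statistics

random.seed(0)
true_weight = 70.0  # kg

# A miscalibrated but very consistent scale: +5 kg offset, tiny random noise.
readings = [true_weight + 5.0 + random.gauss(0, 0.1) for _ in range(20)]

spread = statistics.stdev(readings)             # small -> reliable
bias = statistics.mean(readings) - true_weight  # large -> not valid
print(f"spread={spread:.2f} kg, bias={bias:.2f} kg")
```

The spread is tiny (the results are reproducible) while the bias is large (they are systematically wrong), which is exactly the distinction the table draws.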

  • Understanding reliability vs validity
  • How are reliability and validity assessed?
  • How to ensure validity and reliability in your research
  • Where to write about reliability and validity in a thesis

Validity is harder to assess than reliability, but it is even more important. To obtain useful results, the methods you use to collect your data must be valid: the research must be measuring what it claims to measure. This ensures that your discussion of the data and the conclusions you draw are also valid.


Type of reliability: what does it assess, with an example

  • Test-retest reliability: the consistency of a measure across time: do you get the same results when you repeat the measurement? A group of participants complete a questionnaire designed to measure personality traits. If they repeat the questionnaire days, weeks, or months apart and give the same answers, this indicates high test-retest reliability.
  • Inter-rater reliability: the consistency of a measure across observers: do you get the same results when different people conduct the same measurement? Based on an assessment criteria checklist, five examiners submit substantially different results for the same student project. This indicates that the assessment checklist has low inter-rater reliability (for example, because the criteria are too subjective).
  • Internal consistency: the consistency of the measurement itself: do you get the same results from different parts of a test that are designed to measure the same thing? You design a questionnaire to measure self-esteem. If you randomly split the results into two halves, there should be a strong correlation between the two sets of results. If the two results are very different, this indicates low internal consistency.
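Test-retest reliability, for instance, is commonly quantified as the correlation between two administrations of the same measure. A minimal sketch with made-up scores (the `pearson` helper is written out rather than imported, to keep the example self-contained):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# The same five participants take a questionnaire twice, weeks apart.
time1 = [12, 18, 25, 30, 41]
time2 = [13, 17, 26, 31, 40]
print(round(pearson(time1, time2), 3))  # close to 1 -> high test-retest reliability
```

The same correlation machinery serves for internal consistency: randomly split the items into two halves, total each half per respondent, and correlate the two sets of totals (split-half reliability).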
Type of validity: what does it assess, with an example

  • Construct validity: the adherence of a measure to existing theory and knowledge of the concept being measured. A self-esteem questionnaire could be assessed by measuring other traits known or assumed to be related to the concept of self-esteem (such as social skills and optimism). Strong correlation between the scores for self-esteem and associated traits would indicate high construct validity.
  • Content validity: the extent to which the measurement covers all aspects of the concept being measured. A test that aims to measure a class of students' level of Spanish contains reading, writing, and speaking components, but no listening component. Experts agree that listening comprehension is an essential aspect of language ability, so the test lacks content validity for measuring the overall level of ability in Spanish.
  • Criterion validity: the extent to which the result of a measure corresponds to other valid measures of the same concept. A survey is conducted to measure the political opinions of voters in a region. If the results accurately predict the later outcome of an election in that region, this indicates that the survey has high criterion validity.

To assess the validity of a cause-and-effect relationship, you also need to consider internal validity (the design of the experiment ) and external validity (the generalisability of the results).

If you use scores or ratings to measure variations in something (such as psychological traits, levels of ability, or physical properties), it’s important that your results reflect the real variations as accurately as possible. Validity should be considered in the very earliest stages of your research, when you decide how you will collect your data .

Ensure that your method and measurement technique are of high quality and targeted to measure exactly what you want to know. They should be thoroughly researched and based on existing knowledge.

For example, to collect data on a personality trait, you could use a standardised questionnaire that is considered reliable and valid. If you develop your own questionnaire, it should be based on established theory or the findings of previous studies, and the questions should be carefully and precisely worded.

To produce valid generalisable results, clearly define the population you are researching (e.g., people from a specific age range, geographical location, or profession). Ensure that you have enough participants and that they are representative of the population.

Reliability should be considered throughout the data collection process. When you use a tool or technique to collect data, it’s important that the results are precise, stable, and reproducible.

For example, if you are conducting interviews or observations, clearly define how specific behaviours or responses will be counted, and make sure questions are phrased the same way each time.

  • Standardise the conditions of your research

For example, in an experimental setup, make sure all participants are given the same information and tested under the same conditions.

It’s appropriate to discuss reliability and validity in various sections of your thesis or dissertation or research paper. Showing that you have taken them into account in planning your research and interpreting the results makes your work more credible and trustworthy.

Reliability and validity in a thesis

  • Literature review: what have other researchers done to devise and improve methods that are reliable and valid?
  • Methodology: how did you plan your research to ensure the reliability and validity of the measures used? This includes the chosen sample set and size, sample preparation, external conditions, and measuring techniques.
  • Results: if you calculate reliability and validity, state these values alongside your main results.
  • Discussion: this is the moment to talk about how reliable and valid your results actually were. Were they consistent, and did they reflect true values? If not, why not?
  • Conclusion: if reliability and validity were a big problem for your findings, it might be helpful to mention this here.


Middleton, F. (2022, October 10). Reliability vs Validity in Research | Differences, Types & Examples. Scribbr. Retrieved 17 July 2024, from https://www.scribbr.co.uk/research-methods/reliability-or-validity/



Validity & Reliability In Research

A Plain-Language Explanation (With Examples)

By: Derek Jansen (MBA) | Expert Reviewer: Kerryn Warren (PhD) | September 2023

Validity and reliability are two related but distinctly different concepts within research. Understanding what they are and how to achieve them is critically important to any research project. In this post, we’ll unpack these two concepts as simply as possible.

This post is based on our popular online course, Research Methodology Bootcamp. In the course, we unpack the basics of methodology using straightforward language and loads of examples.

Overview: Validity & Reliability

  • The big picture
  • Validity 101
  • Reliability 101 
  • Key takeaways

First, The Basics…

First, let’s start with a big-picture view and then we can zoom in to the finer details.

Validity and reliability are two incredibly important concepts in research, especially within the social sciences. Both validity and reliability have to do with the measurement of variables and/or constructs – for example, job satisfaction, intelligence, productivity, etc. When undertaking research, you’ll often want to measure these types of constructs and variables and, at the simplest level, validity and reliability are about ensuring the quality and accuracy of those measurements .

As you can probably imagine, if your measurements aren’t accurate or there are quality issues at play when you’re collecting your data, your entire study will be at risk. Therefore, validity and reliability are very important concepts to understand (and to get right). So, let’s unpack each of them.


What Is Validity?

In simple terms, validity (also called “construct validity”) is all about whether a research instrument accurately measures what it’s supposed to measure .

For example, let’s say you have a set of Likert scales that are supposed to quantify someone’s level of overall job satisfaction. If this set of scales focused purely on only one dimension of job satisfaction, say pay satisfaction, this would not be a valid measurement, as it only captures one aspect of the multidimensional construct. In other words, pay satisfaction alone is only one contributing factor toward overall job satisfaction, and therefore it’s not a valid way to measure someone’s job satisfaction.


Oftentimes in quantitative studies, the way in which the researcher or survey designer interprets a question or statement can differ from how the study participants interpret it . Given that respondents don’t have the opportunity to ask clarifying questions when taking a survey, it’s easy for these sorts of misunderstandings to crop up. Naturally, if the respondents are interpreting the question in the wrong way, the data they provide will be pretty useless . Therefore, ensuring that a study’s measurement instruments are valid – in other words, that they are measuring what they intend to measure – is incredibly important.

There are various types of validity and we’re not going to go down that rabbit hole in this post, but it’s worth quickly highlighting the importance of making sure that your research instrument is tightly aligned with the theoretical construct you’re trying to measure .  In other words, you need to pay careful attention to how the key theories within your study define the thing you’re trying to measure – and then make sure that your survey presents it in the same way.

For example, sticking with the “job satisfaction” construct we looked at earlier, you’d need to clearly define what you mean by job satisfaction within your study (and this definition would of course need to be underpinned by the relevant theory). You’d then need to make sure that your chosen definition is reflected in the types of questions or scales you’re using in your survey . Simply put, you need to make sure that your survey respondents are perceiving your key constructs in the same way you are. Or, even if they’re not, that your measurement instrument is capturing the necessary information that reflects your definition of the construct at hand.



What Is Reliability?

As with validity, reliability is an attribute of a measurement instrument – for example, a survey, a weight scale or even a blood pressure monitor. But while validity is concerned with whether the instrument is measuring the “thing” it’s supposed to be measuring, reliability is concerned with consistency and stability . In other words, reliability reflects the degree to which a measurement instrument produces consistent results when applied repeatedly to the same phenomenon , under the same conditions .

As you can probably imagine, a measurement instrument that achieves a high level of consistency is naturally more dependable (or reliable) than one that doesn’t – in other words, it can be trusted to provide consistent measurements . And that, of course, is what you want when undertaking empirical research. If you think about it within a more domestic context, just imagine if you found that your bathroom scale gave you a different number every time you hopped on and off of it – you wouldn’t feel too confident in its ability to measure the variable that is your body weight 🙂

It’s worth mentioning that reliability also extends to the person using the measurement instrument . For example, if two researchers use the same instrument (let’s say a measuring tape) and they get different measurements, there’s likely an issue in terms of how one (or both) of them are using the measuring tape. So, when you think about reliability, consider both the instrument and the researcher as part of the equation.

As with validity, there are various types of reliability and various tests that can be used to assess the reliability of an instrument. A popular one that you’ll likely come across for survey instruments is Cronbach’s alpha , which is a statistical measure that quantifies the degree to which items within an instrument (for example, a set of Likert scales) measure the same underlying construct . In other words, Cronbach’s alpha indicates how closely related the items are and whether they consistently capture the same concept . 
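Cronbach's alpha is straightforward to compute from an item-by-respondent score matrix: it compares the sum of the individual item variances with the variance of the respondents' total scores. A minimal sketch with invented Likert data (the `cronbach_alpha` helper is written here for illustration; statistics packages provide equivalents):

```python
def cronbach_alpha(items):
    """items: one list of scores per item; inner positions index respondents."""
    k = len(items)

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var_sum = sum(var(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Four Likert items answered by five respondents (hypothetical data).
items = [
    [3, 4, 3, 5, 4],
    [3, 5, 3, 4, 4],
    [2, 4, 3, 5, 5],
    [3, 4, 2, 5, 4],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # -> 0.91: the items hang together well
```

A commonly quoted rule of thumb treats alpha of roughly 0.7 or above as acceptable internal consistency, though the threshold depends on the field and the stakes of the measurement.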

Reliability reflects whether an instrument produces consistent results when applied to the same phenomenon, under the same conditions.

Recap: Key Takeaways

Alright, let’s quickly recap to cement your understanding of validity and reliability:

  • Validity is concerned with whether an instrument (e.g., a set of Likert scales) is measuring what it’s supposed to measure
  • Reliability is concerned with whether that measurement is consistent and stable when measuring the same phenomenon under the same conditions.

In short, validity and reliability are both essential to ensuring that your data collection efforts deliver high-quality, accurate data that help you answer your research questions . So, be sure to always pay careful attention to the validity and reliability of your measurement instruments when collecting and analysing data. As the adage goes, “rubbish in, rubbish out” – make sure that your data inputs are rock-solid.



Qualitative Researcher Dr Kriukow


Validity and Reliability in Qualitative Research


What Is Validity and Reliability in Qualitative Research?

In quantitative research, reliability refers to the consistency of certain measurements, and validity to whether those measurements measure what they are supposed to measure. Things are slightly different, however, in qualitative research.

Reliability in qualitative studies is mostly a matter of “being thorough, careful and honest in carrying out the research” (Robson, 2002: 176). In qualitative interviews, this issue relates to a number of practical aspects of the process of interviewing, including the wording of interview questions, establishing rapport with the interviewees and considering ‘power relationship’ between the interviewer and the participant (e.g. Breakwell, 2000; Cohen et al., 2007; Silverman, 1993).

What seems more relevant when discussing qualitative studies is their validity , which very often is being addressed with regard to three common threats to validity in qualitative studies, namely researcher bias , reactivity and respondent bias (Lincoln and Guba, 1985).

Researcher bias refers to any negative influence of the researcher's knowledge or assumptions on the study, including the influence of his or her assumptions on the design, analysis or even the sampling strategy. Reactivity, in turn, refers to the possible influence of the researcher himself/herself on the studied situation and people. Respondent bias refers to a situation where respondents do not provide honest responses for any reason, which may include them perceiving a given topic as a threat, or being willing to 'please' the researcher with responses they believe are desirable.

Robson (2002) suggested a number of strategies aimed at addressing these threats to validity, being prolonged involvement , triangulation , peer debriefing , member checking ,  negative case analysis  and keeping an audit trail .


So, what exactly are these strategies and how can you apply them in your research?

Prolonged involvement refers to the length of time of the researcher's involvement in the study, including involvement with the environment and the studied participants. It may be achieved, for example, through the duration of the study, or through the researcher belonging to the studied community (e.g. a student investigating other students' experiences). Being a member of this community, or even being a friend to your participants (see my blog post on the ethics of researching friends), may be a great advantage: it increases the level of trust between you, the researcher, and the participants, and reduces the possible threats of reactivity and respondent bias. It may, however, pose a threat in the form of researcher bias stemming from your, and the participants', assumptions of similarity and presuppositions about shared experiences (for example, they may not say something in the interview because they assume that both of you know it anyway; this way, you may miss some valuable data for your study).

Triangulation may refer to triangulation of data through utilising different instruments of data collection, methodological triangulation through employing mixed methods approach and theory triangulation through comparing different theories and perspectives with your own developing “theory” or through drawing from a number of different fields of study.

Peer debriefing and support is really an element of your student experience at the university throughout the process of the study. Various opportunities to present and discuss your research at its different stages, either at internally organised events at your university (e.g. student presentations, workshops, etc.) or at external conferences (which I strongly suggest you start attending), will provide you with valuable feedback, criticism and suggestions for improvement. These events are invaluable in helping you to assess the study from a more objective and critical perspective and to recognise and address its limitations. This input from other people thus helps to reduce researcher bias.

Member checking , or testing the emerging findings with the research participants, in order to increase the validity of the findings, may take various forms in your study. It may involve, for example, regular contact with the participants throughout the period of the data collection and analysis and verifying certain interpretations and themes resulting from the analysis of the data (Curtin and Fossey, 2007). As a way of controlling the influence of your knowledge and assumptions on the emerging interpretations, if you are not clear about something a participant had said, or written, you may send him/her a request to verify either what he/she meant or the interpretation you made based on that. Secondly, it is common to have a follow-up, “validation interview” that is, in itself, a tool for validating your findings and verifying whether they could be applied to individual participants (Buchbinder, 2011), in order to determine outlying, or negative, cases and to re-evaluate your understanding of a given concept (see further below). Finally, member checking, in its most commonly adopted form, may be carried out by sending the interview transcripts to the participants and asking them to read them and provide any necessary comments or corrections (Carlson, 2010).

Negative case analysis is the process of analysing “cases”, or sets of data collected from a single participant, that do not match the patterns emerging from the rest of the data. Whenever an emerging explanation of a phenomenon you are investigating does not seem applicable to one, or a small number, of the participants, you should carry out a new line of analysis aimed at understanding the source of this discrepancy. Although you may be tempted to ignore these cases for fear of having to do extra work, it should become your habit to explore them in detail, as negative case analysis, especially when combined with member checking, is a valuable way of reducing researcher bias.

Finally, the notion of keeping an audit trail refers to monitoring and keeping a record of all the research-related activities and data, including the raw interview and journal data, the audio recordings, the researcher’s diary (see this post about recommended software for a researcher’s diary) and the coding book.

If you adopt the above strategies skilfully, you are likely to minimise threats to the validity of your study. Don’t forget to look at the resources in the reference list if you would like to read more on this topic!

Breakwell, G. M. (2000). Interviewing. In Breakwell, G. M., Hammond, S., & Fife-Shaw, C. (eds.) Research Methods in Psychology. 2nd ed. London: Sage.

Buchbinder, E. (2011). Beyond checking: Experiences of the validation interview. Qualitative Social Work, 10(1), 106-122.

Carlson, J. A. (2010). Avoiding traps in member checking. The Qualitative Report, 15(5), 1102-1113.

Cohen, L., Manion, L., & Morrison, K. (2007). Research Methods in Education. 6th ed. London: Routledge.

Curtin, M., & Fossey, E. (2007). Appraising the trustworthiness of qualitative studies: Guidelines for occupational therapists. Australian Occupational Therapy Journal, 54, 88-94.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic Inquiry. Newbury Park, CA: Sage.

Robson, C. (2002). Real World Research: A Resource for Social Scientists and Practitioner-Researchers. Oxford, UK: Blackwell Publishers.

Silverman, D. (1993). Interpreting Qualitative Data. London: Sage.

Jarek Kriukow

There is an argument for using your identity and biases to enrich the research (see my recent blog at researcheridentity.wordpress.com), provided that the researcher seeks to fully comprehend their place in the research and is fully open, honest and clear about that in the write-up. I have come to see reliability and validity more as a defence of whether the research is rigorous, thorough and careful, and therefore whether it is morally, ethically and accurately defensible.


Hi Nathan, thank you for your comment. I agree that being explicit about your own status and everything that you bring into the study is important – it’s a very similar issue (although seemingly a different topic) to what I discussed in the blog post about grounded theory, where I talked about being explicit about the influence of our previous knowledge on the data. I have also experienced this dilemma of “what to do with” my status as simultaneously a “researcher”, an “insider”, a “friend” and a “fellow Polish migrant” when conducting my PhD study of Polish migrants’ English language identity, and came to similar conclusions to the ones you reach in your article – to acknowledge these “multiple identities” and make the best of them.

I have read your blog article and really liked it – would you mind if I shared it on my Facebook page, and linked to it from my blog section on this page?

Please do share my blog by all means; I’d be delighted. Are you on twitter? I’m @Nathan_AHT_EDD I strongly believe that we cannot escape our past, including our multiple/present habitus and identities when it comes to qualitative educational research. It is therefore, arguably, logical to ethically and sensibly embrace it/them to enrich the data. Identities cannot be taken on and off like a coat, they are, “lived as deeply committed personal projects” (Clegg, 2008: p.336) and so if we embrace them we bring a unique insight into the process and have a genuine investment to make the research meaningful and worthy of notice.

Hi Nathan, I don’t have Twitter… I know – somehow I still haven’t had time to get to grips with it. I do have Facebook, feel free to find me there. I also started to follow your blog so that I am notified about your content. I agree with what you said here and in your posts, and I like the topic of your blog. This is definitely something that we should pay more attention to when doing research. It would be interesting to talk some time and exchange opinions, as our research interests seem very closely related. Have a good day!


Validity vs. Reliability in Research: What's the Difference?


Introduction

  • What is the difference between reliability and validity in a study?
  • What is an example of reliability and validity?
  • How to ensure validity and reliability in your research
  • Critiques of reliability and validity

In research, validity and reliability are crucial for producing robust findings. They provide a foundation that assures scholars, practitioners, and readers alike that the research's insights are both accurate and consistent. However, the nuanced nature of qualitative data often blurs the lines between these concepts, making it imperative for researchers to discern their distinct roles.

This article seeks to illuminate the intricacies of reliability and validity, highlighting their significance and distinguishing their unique attributes. By understanding these critical facets, qualitative researchers can ensure their work resonates not only with authenticity but also with trustworthiness.


In the domain of research, whether qualitative or quantitative, two concepts often arise when discussing the quality and rigour of a study: reliability and validity. These two terms, while interconnected, have distinct meanings that hold significant weight in the world of research.

Reliability, at its core, speaks to the consistency of a study. If a study or test measures the same concept repeatedly and yields the same results, it demonstrates a high degree of reliability. A common method for assessing reliability is through internal consistency reliability, which checks if multiple items that measure the same concept produce similar scores.
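One common statistic for internal consistency is Cronbach’s alpha, which compares the variance of individual items with the variance of respondents’ total scores. The sketch below is illustrative only (the function name and the Likert-style scores are invented for this example) and uses nothing beyond Python’s standard library:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of item-score columns.

    `items` is a list of k lists, each holding one item's scores
    across the same n respondents.
    """
    k = len(items)
    item_vars = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Three Likert items answered by five respondents; similar scores
# across items suggest they tap the same underlying construct.
items = [
    [4, 3, 5, 2, 4],
    [4, 4, 5, 2, 3],
    [5, 3, 4, 2, 4],
]
alpha = cronbach_alpha(items)
```

By convention, values above roughly 0.7 are read as acceptable internal consistency, though the threshold varies by field.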

Another method often used is inter-rater reliability, which gauges the consistency of scores given by different raters. This approach is especially relevant to qualitative research, where it can help researchers assess the clarity of their code system and the consistency of their codings. For a study to be dependable, it is imperative to ensure that a sufficient level of reliability is achieved.
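For categorical codings by two raters, inter-rater agreement is often quantified with Cohen’s kappa, which discounts the agreement expected by chance. A minimal sketch, using invented codes for ten interview segments:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal code frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(counts_a) | set(counts_b)
    )
    return (observed - expected) / (1 - expected)

# Two coders assign one of two codes to ten interview segments.
a = ["pos", "pos", "neg", "pos", "neg", "pos", "pos", "neg", "neg", "pos"]
b = ["pos", "pos", "neg", "pos", "neg", "neg", "pos", "neg", "pos", "pos"]
kappa = cohens_kappa(a, b)
```

By common rules of thumb, values around 0.4–0.6 indicate moderate agreement, and values above 0.8 indicate strong agreement.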

On the other hand, validity is concerned with accuracy. It looks at whether a study truly measures what it claims to. Within the realm of validity, several types exist. Construct validity, for instance, verifies that a study measures the intended abstract concept or underlying construct. If a study aims to measure self-esteem and accurately captures this abstract trait, it demonstrates strong construct validity.

Content validity ensures that a test or study comprehensively represents the entire domain of the concept it seeks to measure. For instance, if a test aims to assess mathematical ability, it should cover arithmetic, algebra, geometry, and more to showcase strong content validity.

Criterion validity is another form of validity that ensures that the scores from a test correlate well with a measure from a related outcome. A subset of this is predictive validity, which checks if the test can predict future outcomes. For instance, if an aptitude test can predict future job performance, it can be said to have high predictive validity.

The distinction between reliability and validity becomes clear when one considers the nature of their focus. While reliability is concerned with consistency and reproducibility, validity zeroes in on accuracy and truthfulness.

A research tool can be reliable without being valid. For instance, a faulty instrument might consistently give the same wrong readings (reliable but not valid). Conversely, in discussions about test reliability, the same test administered multiple times could sometimes hit the mark and at other times miss it entirely, producing different scores each time. This would make it valid in some instances but not reliable.

For a study to be robust, it must achieve both reliability and validity. Reliability ensures the study's findings are reproducible while validity confirms that it accurately represents the phenomena it claims to. Ensuring both in a study means the results are both dependable and accurate, forming a cornerstone for high-quality research.


Understanding the nuances of reliability and validity becomes clearer when contextualized within a real-world research setting. Imagine a qualitative study where a researcher aims to explore the experiences of teachers in urban schools concerning classroom management. The primary method of data collection is semi-structured interviews .

To ensure the reliability of this qualitative study, the researcher crafts a consistent list of open-ended questions for the interview. This ensures that, while each conversation might meander based on the individual’s experiences, there remains a core set of topics related to classroom management that every participant addresses.

The essence of reliability in this context isn't necessarily about garnering identical responses but rather about achieving a consistent approach to data collection and subsequent interpretation . As part of this commitment to reliability, two researchers might independently transcribe and analyze a subset of these interviews. If they identify similar themes and patterns in their independent analyses, it suggests a consistent interpretation of the data, showcasing inter-rater reliability .

Validity , on the other hand, is anchored in ensuring that the research genuinely captures and represents the lived experiences and sentiments of teachers concerning classroom management. To establish content validity, the list of interview questions is thoroughly reviewed by a panel of educational experts. Their feedback ensures that the questions encompass the breadth of issues and concerns related to classroom management in urban school settings.

As the interviews are conducted, the researcher pays close attention to the depth and authenticity of responses. After the interviews, member checking could be employed, where participants review the researcher's interpretation of their responses to ensure that their experiences and perspectives have been accurately captured. This strategy helps in affirming the study's construct validity, ensuring that the abstract concept of "experiences with classroom management" has been truthfully and adequately represented.

In this example, we can see that while the interview study is rooted in qualitative methods and subjective experiences, the principles of reliability and validity can still meaningfully inform the research process. They serve as guides to ensure the research's findings are both dependable and genuinely reflective of the participants' experiences.

Ensuring validity and reliability in research, irrespective of its qualitative or quantitative nature, is pivotal to producing results that are both trustworthy and robust. Here's how you can integrate these concepts into your study to ensure its rigor:

Reliability is about consistency. One of the most straightforward ways to gauge it in quantitative research is using test-retest reliability. It involves administering the same test to the same group of participants on two separate occasions and then comparing the results.

A high degree of similarity between the two sets of results indicates good reliability. This can often be measured using a correlation coefficient, where a value closer to 1 indicates a strong positive consistency between the two test iterations.
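That test-retest computation can be sketched in a few lines; the scores below are invented, and a Pearson correlation near 1 indicates strong consistency between the two administrations:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Scores for the same five participants at time 1 and time 2.
time1 = [12, 18, 25, 9, 20]
time2 = [13, 17, 24, 10, 21]
r = pearson_r(time1, time2)
```

A value close to 1, as here, suggests the test yields stable scores over time.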

Validity, on the other hand, ensures that the research genuinely measures what it intends to. There are various forms of validity to consider. Convergent validity ensures that two measures of the same construct, or measures that should theoretically be related, are indeed correlated. For example, two different measures assessing self-esteem should show similar results for the same group, highlighting that they are measuring the same underlying construct.

Face validity is the most basic form of validity and is gauged by the sheer appearance of the measurement tool. If, at face value, a test seems like it measures what it claims to, it has face validity. This is often the first step and is usually followed by more rigorous forms of validity testing.

Criterion-related validity, a subtype of the previously discussed criterion validity, evaluates how well the outcomes of a particular test or measurement correlate with another related measure. For example, if a new tool is developed to measure reading comprehension, its results can be compared with those of an established reading comprehension test to assess its criterion-related validity. If the results show a strong correlation, it's a sign that the new tool is valid.

Ensuring both validity and reliability requires deliberate planning, meticulous testing, and constant reflection on the study's methods and results. This might involve using established scales or measures with proven validity and reliability, conducting pilot studies to refine measurement tools, and always staying cognizant of the fact that these two concepts are important considerations for research robustness.

While reliability and validity are foundational concepts in many traditional research paradigms, they have not escaped scrutiny, especially from critical and poststructuralist perspectives. These critiques often arise from the fundamental philosophical differences in how knowledge, truth, and reality are perceived and constructed.

From a poststructuralist viewpoint, the very pursuit of a singular "truth" or an objective reality is questionable. In such a perspective, multiple truths exist, each shaped by its own socio-cultural, historical, and individual contexts.

Reliability, with its emphasis on consistent replication, might then seem at odds with this understanding. If truths are multiple and shifting, how can consistency across repeated measures or observations be a valid measure of anything other than the research instrument's stability?

Validity, too, faces critique. In seeking to ensure that a study measures what it purports to measure, there's an implicit assumption of an observable, knowable reality. Poststructuralist critiques question this foundation, arguing that reality is too fluid, multifaceted, and influenced by power dynamics to be pinned down by any singular measurement or representation.

Moreover, the very act of determining "validity" often requires an external benchmark or "gold standard." This brings up the issue of who determines this standard and the power dynamics and potential biases inherent in such decisions.

Another point of contention is the way these concepts can inadvertently prioritize certain forms of knowledge over others. For instance, privileging research that meets stringent reliability and validity criteria might marginalize more exploratory, interpretive, or indigenous research methods. These methods, while offering deep insights, might not align neatly with traditional understandings of reliability and validity, potentially relegating them to the periphery of "accepted" knowledge production.

To be sure, reliability and validity serve as guiding principles in many research approaches. However, it's essential to recognize their limitations and the critiques posed by alternative epistemologies. Engaging with these critiques doesn't diminish the value of reliability and validity but rather enriches our understanding of the multifaceted nature of knowledge and the complexities of its pursuit.


Reliability and Validity – Definitions, Types & Examples

Published by Alvin Nicolas on August 16th, 2021; revised on October 26, 2023

A researcher must test the collected data before drawing any conclusions. Every research design needs to be concerned with reliability and validity to measure the quality of the research.

What is Reliability?

Reliability refers to the consistency of the measurement. Reliability shows how trustworthy the score of a test is. If the collected data shows the same results after being tested using various methods and sample groups, the information is reliable. Note that reliability alone does not guarantee validity: a method must be reliable to be valid, but a reliable method is not necessarily valid.

Example: If you weigh yourself on a weighing scale throughout the day, you’ll get the same results. These are considered reliable results obtained through repeated measures.

Example: If a teacher administers the same maths test to her students and repeats it next week with the same questions, and the students get the same scores, then the reliability of the test is high.

What is Validity?

Validity refers to the accuracy of the measurement. Validity shows how suitable a specific test is for a particular situation. If the results are accurate in terms of the researcher’s situation, explanation, and prediction, then the research is valid.

If the method of measuring is accurate, then it’ll produce accurate results. If a method is not reliable, it cannot be valid; however, a method can be reliable without being valid.

Example:  Your weighing scale shows different results each time you weigh yourself within a day even after handling it carefully, and weighing before and after meals. Your weighing machine might be malfunctioning. It means your method had low reliability. Hence you are getting inaccurate or inconsistent results that are not valid.

Example: Suppose a questionnaire is distributed among a group of people to check the quality of a skincare product, and the same questionnaire is repeated with many groups. If you get the same responses from the various participants, the questionnaire has high reliability; whether it is valid depends on whether the questions actually capture product quality.

Most of the time, validity is difficult to measure even when the process of measurement is reliable, because it is not easy to know the true value being measured.

Example: If the weighing scale shows the same result, let’s say 70 kg, each time, even though your actual weight is 55 kg, then the weighing scale is malfunctioning. It shows consistent results and is therefore reliable, but it cannot be considered valid: the method has high reliability and low validity.

Internal Vs. External Validity

One of the key features of randomised designs is their high internal and external validity.

Internal validity  is the ability to draw a causal link between your treatment and the dependent variable of interest. It means the observed changes should be due to the experiment conducted, and any external factor should not influence the  variables .

Examples of external factors: age, education level, height, and grade.

External validity  is the ability to identify and generalise your study outcomes to the population at large. The relationship between the study’s situation and the situations outside the study is considered external validity.

Also, read about Inductive vs Deductive reasoning in this article.


Threats to Internal Validity

Confounding factors: unexpected events during the experiment that are not part of the treatment. Example: you attribute the increased weight of your participants to a lack of physical activity, when it was actually due to the consumption of coffee with sugar.

Maturation: changes in participants due to the passage of time, which influence the dependent variable. Example: during a long-term experiment, subjects may become tired, bored, and hungry.

Testing: the results of one test affect the results of another test. Example: participants in the first experiment may react differently during the second experiment.

Instrumentation: changes in the instrument’s calibration. Example: a change in the measuring instrument may give different results instead of the expected results.

Statistical regression: groups selected for their extreme scores are not as extreme on subsequent testing. Example: students who failed the pre-final exam are likely to pass the final exam; they might be more confident and conscientious than earlier.

Selection bias: choosing comparison groups without randomisation. Example: a group of trained and efficient teachers is selected to teach children communication skills, instead of teachers being selected randomly.

Experimental mortality: participants may leave the experiment when it runs longer than expected. Example: participants may leave the study because they are dissatisfied with the time extension, even if they were doing well.

Threats to External Validity

Reactive/interactive effects of testing: participants in a pre-test may become aware of the upcoming experiment, so the treatment may not be effective without the pre-test. Example: students who failed the pre-final exam are likely to pass the final exam; they might be more confident and conscientious than earlier.

Selection of participants: when a group of participants is selected for specific characteristics, the treatment may work only on participants possessing those characteristics. Example: if an experiment is conducted specifically on the health issues of pregnant women, the same treatment cannot be applied to male participants.

How to Assess Reliability and Validity?

Reliability can be measured by comparing the consistency of the procedure and its results. There are various methods to measure validity and reliability. Reliability can be measured through various statistical methods depending on the type of reliability, as explained below:

Types of Reliability

Test-retest: measures the consistency of results at different points in time; it identifies whether the results are the same after repeated measures. Example: suppose a questionnaire about the quality of a skincare product is distributed to a group of people and then repeated with the same group later. If you get the same responses, the questionnaire has high test-retest reliability.

Inter-rater: measures the consistency of results obtained at the same time by different raters (researchers). Example: suppose five researchers assess the academic performance of the same student, using questions drawn from all the academic subjects, and submit widely differing results. This shows that the assessment has low inter-rater reliability.

Parallel forms: measures equivalence; it involves different forms of the same test performed by the same participants. Example: suppose the same researcher conducts two different forms of a test on the same topic with the same students, say a written and an oral test. If the results are the same, the parallel-forms reliability of the test is high; otherwise it is low.

Split-half (internal consistency): measures the consistency of the measurement; the results of the same test are split into two halves and compared with each other. Example: if there is a large difference between the results of the two halves, the split-half reliability of the test is low.
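The split-half approach above can be illustrated numerically: sum two halves of the test per respondent, correlate the half totals, and apply the Spearman-Brown correction to estimate full-test reliability. The function name and item scores below are invented for illustration:

```python
from statistics import mean

def split_half_reliability(items):
    """Split-half reliability with the Spearman-Brown correction.

    `items` is a list of item-score columns over the same respondents;
    odd- and even-numbered items form the two halves.
    """
    half1 = [sum(scores) for scores in zip(*items[0::2])]
    half2 = [sum(scores) for scores in zip(*items[1::2])]
    # Pearson correlation between the two half-test totals
    m1, m2 = mean(half1), mean(half2)
    cov = sum((a - m1) * (b - m2) for a, b in zip(half1, half2))
    var1 = sum((a - m1) ** 2 for a in half1)
    var2 = sum((b - m2) ** 2 for b in half2)
    r = cov / (var1 * var2) ** 0.5
    return 2 * r / (1 + r)  # Spearman-Brown step-up

# Four test items answered by five respondents (invented scores).
items = [
    [4, 3, 5, 2, 4],
    [4, 4, 5, 2, 3],
    [5, 3, 4, 2, 4],
    [4, 3, 5, 1, 4],
]
rel = split_half_reliability(items)
```

The Spearman-Brown step compensates for the fact that each half is only half the length of the full test, which would otherwise understate reliability.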

Types of Validity

As discussed above, the reliability of a measurement alone cannot determine its validity. Validity is difficult to measure even if the method is reliable. The following types of tests are conducted to measure validity.

Content validity: shows whether all aspects of the concept being measured are covered. Example: a language test designed to measure writing, reading, listening, and speaking skills has high content validity.

Face validity: concerns whether, on the face of it, the test or procedure appears to measure what it should. Example: the type of questions included in the question paper, the time and marks allotted, and the number of questions and their categories. Is it a good question paper for measuring the academic performance of students?

Construct validity: shows whether the test measures the correct construct (ability, attribute, trait, or skill). Example: is a test designed to measure communication skills actually measuring communication skills?

Criterion validity: shows whether the test scores obtained are similar to other measures of the same concept. Example: the results of a pre-final exam accurately predict the results of the later final exam, showing that the test has high criterion validity.


How to Increase Reliability?

  • Use an appropriate questionnaire to measure the competency level.
  • Ensure a consistent environment for participants.
  • Make the participants familiar with the criteria of assessment.
  • Train the participants appropriately.
  • Analyse the research items regularly to avoid poor performance.

How to Increase Validity?

Ensuring validity is also not an easy job. Some methods for ensuring validity are given below:

  • Reactivity should be minimised as a first concern.
  • The Hawthorne effect should be reduced.
  • The respondents should be motivated.
  • The intervals between the pre-test and post-test should not be lengthy.
  • Dropout rates should be minimised.
  • Inter-rater reliability should be ensured.
  • Control and experimental groups should be matched with each other.

How to Implement Reliability and Validity in your Thesis?

According to experts, it is helpful to implement the concepts of reliability and validity, and in theses and dissertations these concepts are adopted extensively. A method for implementation is given below:

  • Discuss all the planning around reliability and validity, including the chosen samples and sample size and the techniques used to measure reliability and validity.
  • Talk about the level of reliability and validity of your results and their influence on the values obtained.
  • Discuss the contribution of other researchers to improving reliability and validity.

Frequently Asked Questions

What is reliability and validity in research?

Reliability in research refers to the consistency and stability of measurements or findings. Validity relates to the accuracy and truthfulness of results, measuring what the study intends to. Both are crucial for trustworthy and credible research outcomes.

Validity in research refers to the extent to which a study accurately measures what it intends to measure. It ensures that the results are truly representative of the phenomena under investigation. Without validity, research findings may be irrelevant, misleading, or incorrect, limiting their applicability and credibility.

Reliability in research refers to the consistency and stability of measurements over time. If a study is reliable, repeating the experiment or test under the same conditions should produce similar results. Without reliability, findings become unpredictable and lack dependability, potentially undermining the study’s credibility and generalisability.

What is reliability in psychology?

In psychology, reliability refers to the consistency of a measurement tool or test. A reliable psychological assessment produces stable and consistent results across different times, situations, or raters. It ensures that an instrument’s scores are not due to random error, making the findings dependable and reproducible in similar conditions.

What is test-retest reliability?

Test-retest reliability assesses the consistency of measurements taken by a test over time. It involves administering the same test to the same participants at two different points in time and comparing the results. A high correlation between the scores indicates that the test produces stable and consistent results over time.

How to improve reliability of an experiment?

  • Standardise procedures and instructions.
  • Use consistent and precise measurement tools.
  • Train observers or raters to reduce subjective judgments.
  • Increase sample size to reduce random errors.
  • Conduct pilot studies to refine methods.
  • Repeat measurements or use multiple methods.
  • Address potential sources of variability.

What is the difference between reliability and validity?

Reliability refers to the consistency and repeatability of measurements, ensuring results are stable over time. Validity indicates how well an instrument measures what it’s intended to measure, ensuring accuracy and relevance. While a test can be reliable without being valid, a valid test must inherently be reliable. Both are essential for credible research.

Are interviews reliable and valid?

Interviews can be both reliable and valid, but they are susceptible to biases. The reliability and validity depend on the design, structure, and execution of the interview. Structured interviews with standardised questions improve reliability. Validity is enhanced when questions accurately capture the intended construct and when interviewer biases are minimised.

Are IQ tests valid and reliable?

IQ tests are generally considered reliable, producing consistent scores over time. Their validity, however, is a subject of debate. While they effectively measure certain cognitive skills, whether they capture the entirety of “intelligence” or predict success in all life areas is contested. Cultural bias and over-reliance on tests are also concerns.

Are questionnaires reliable and valid?

Questionnaires can be both reliable and valid if well-designed. Reliability is achieved when they produce consistent results over time or across similar populations. Validity is ensured when questions accurately measure the intended construct. However, factors like poorly phrased questions, respondent bias, and lack of standardisation can compromise their reliability and validity.


Comprehensive criteria to judge validity and reliability of qualitative research within the realism paradigm

Qualitative Market Research

ISSN: 1352-2752

Article publication date: 1 September 2000

Aims to address a gap in the literature about quality criteria for validity and reliability in qualitative research within the realism scientific paradigm. Six comprehensive and explicit criteria for judging realism research are developed, drawing on the three elements of a scientific paradigm of ontology, epistemology and methodology. The first two criteria concern ontology, that is, ontological appropriateness and contingent validity. The third criterion concerns epistemology: multiple perceptions of participants and of peer researchers. The final three criteria concern methodology: methodological trustworthiness, analytic generalisation and construct validity. Comparisons are made with criteria in other paradigms, particularly positivism and constructivism. An example of the use of the criteria is given. In conclusion, this paper’s set of six criteria will facilitate the further adoption of the realism paradigm and its evaluation in marketing research about, for instance, networks and relationship marketing.

  • Marketing research
  • Qualitative techniques
  • Case studies

Healy, M. and Perry, C. (2000), "Comprehensive criteria to judge validity and reliability of qualitative research within the realism paradigm", Qualitative Market Research , Vol. 3 No. 3, pp. 118-126. https://doi.org/10.1108/13522750010333861

Copyright © 2000, MCB UP Limited

Validity in Qualitative Research: A Processual Approach

  • The Qualitative Report, January 2019, 24(1):98-112

Paulo Hayashi Junior, University of Campinas
Gustavo Abib, Universidade Federal do Paraná
Norberto Hoppen, Universidade do Vale do Rio dos Sinos

Abstract and Figures

[Figure: The processual construction of validity. Source: the authors]


©2024 Rey Ty, Validity and Reliability in Qualitative, Quantitative, & Mixed Methods Research

Related Papers

VNU Journal of Foreign Studies

Vu Thi Thanh Nha

Educational constructs change over time to reflect developments in research and educational approaches. To illustrate this process, this article examines validity and reliability, two important concepts for justifying research quality. Originally, validity and reliability were applied to quantitative research. However, these criteria cannot be applied in the same way to qualitative research studies, which differ in their theoretical foundations and research aims. Unclear use of these concepts can lead to inappropriate research design or evaluation. This paper therefore first examines the different theoretical foundations underlying the two research traditions. It then analyses the subtle variations in the notions of reliability and validity, and offers implications for researchers seeking to employ these criteria flexibly to enhance research rigour.

Deepak P Kafle

In general practice, qualitative research contributes as significantly as quantitative research, and both pursue the same end: the truth. Qualitative research, also known as naturalistic inquiry, evolved within the social and human sciences and draws on theories of interpretation and human experience. The use of validity and reliability is common in quantitative research, and there are ongoing debates about whether these terms are appropriate for evaluating qualitative studies. Although there is no universally accepted terminology or standard for measuring qualitative studies, all qualitative researchers employ strategies to enhance the credibility of a study throughout its design and implementation. The main aim of this article is to present the concepts of validity and reliability and to show that qualitative research can be properly valid and reliable.

Business & Management Studies: An International Journal

Lütfi Sürücü

The validity and reliability of the scales used in research are essential to producing useful results, so it is important for researchers to understand how they are measured correctly. The primary purpose of this study is to explain how researchers test the validity and reliability of the scales used in their empirical studies and to provide a resource for future research. To this end, the concepts of validity and reliability are introduced, and the main methods used to evaluate them are explained in detail with examples taken from the literature.

Mohammed Ali Bapir

With reference to definitions of validity and reliability, and drawing extensively on conceptualisations of qualitative research, this essay examines the relationship between the reliability of efforts to answer questions about the social world and the validity of the conclusions drawn from such attempts. This points to the fundamental role of theory in relation to research: as an inductivist strategy, qualitative research tries to establish correspondence between reality and representation. The problem of validity and reliability in qualitative research is entwined with the definition of qualitative research itself and the possibility of mirroring it in practice. This presents both challenges and opportunities to qualitative researchers; yet, by taking qualitative criteria in social research into consideration, achieving validity as well as reliability in qualitative research is not impossible.


What is qualitative data? How to understand, collect, and analyze it

A comprehensive guide to qualitative data, how it differs from quantitative data, and why it's a valuable tool for solving problems.

  • What is qualitative research?
  • Importance of qualitative data
  • Differences between qualitative and quantitative data
  • Characteristics of qualitative data
  • Types of qualitative data
  • Pros and cons
  • Collection methods

Everything that’s done digitally—from surfing the web to conducting a transaction—creates a data trail. And data analysts are constantly exploring and examining that trail, trying to find out ways to use data to make better decisions.

Different types of data define more and more of our interactions online—one of the most common and well-known being qualitative data or data that can be expressed in descriptions and feelings. 

This guide takes a deep look at what qualitative data is, what it can be used for, how it’s collected, and how it’s important to you. 

Key takeaways: 

Qualitative data gives insights into people's thoughts and feelings through detailed descriptions from interviews, observations, and visual materials.

The three main types of qualitative data are binary, nominal, and ordinal.

Qualitative data appears in many settings, including research, everyday work, and statistics.

Both qualitative and quantitative research are conducted through surveys and interviews, among other methods. 

What is qualitative data?

Qualitative data is descriptive information that captures observable qualities and characteristics not quantifiable by numbers. It is collected from interviews, focus groups, observations, and documents offering insights into experiences, perceptions, and behaviors.

Qualitative data cannot be counted or measured, because it describes qualities rather than quantities. It refers to the words or labels used to describe certain characteristics or traits.

This type of data answers the "why" or "how" behind an analysis. It's often used in open-ended studies, allowing participants to reveal their true feelings and actions without direction.

Think of qualitative data as the type of data you’d get if you were to ask someone why they did something—what was their reasoning? 

Qualitative research not only helps to collect data, it also gives the researcher a chance to understand the trends and meanings of natural actions. 

This type of data research focuses on the qualities of users—the actions behind the numbers. Qualitative research is the descriptive and subjective research that helps bring context to quantitative data. 

It’s flexible and iterative. For example: 

The music had a light tone that filled the kitchen.

Every blue button had white lettering, while the red buttons had yellow. 

The little girl had red hair with a white hat.

Qualitative data is important in determining the frequency of traits or characteristics. 

Understanding your data can help you understand your customers, users, or visitors better. And when you understand your audience better, you can make them happier. First-party data, which is collected directly from your own audience, is especially valuable because it provides the most accurate and relevant insights for your specific needs.

Qualitative data helps the market researcher answer questions like what issues or problems they are facing, what motivates them, and what improvements can be made.

Examples of qualitative data

You’ve most likely used qualitative data today. This type of data is found in your everyday work and in statistics all over the web. Here are some examples of qualitative data in descriptions, research, work, and statistics. 

Qualitative data in descriptions

Analysis of qualitative data requires descriptive context to support its theories and hypotheses. Here are some core examples of descriptive qualitative data:

The extremely short woman has curly hair and brilliant blue eyes.

A bright white light pierced the small dark space. 

The plump fish jumped out of crystal-clear waters. 

The fluffy brown dog jumped over the tall white fence. 

A soft cloud floated by an otherwise bright blue sky.

Qualitative data in research

Qualitative data research methods allow analysts to use contextual information to create theories and models. Open- and closed-ended questions can help uncover the reasoning behind motivations, frustrations, and actions in any type of case.

Some examples of qualitative data collection in research:

What country do you work in? 

What is your most recent job title? 

How do you rank in the search engines? 

How do you rate your purchase: good, bad, or exceptional?

Qualitative data at work

Professionals in various industries use qualitative observations in their work and research. Examples of this type of data in the workforce include:

A manager gives an employee constructive criticism on their skills. "Your efforts are solid and you understand the product knowledge well, just have patience."

A judge shares the verdict with the courtroom. "The man was found not guilty and is free to go."

A sales associate collects feedback from customers. "The customer said the check-out button did not work."

A teacher gives feedback to their student. "I gave you an A on this project because of your dedication and commitment to the cause."

A digital marketer watches a session replay to get an understanding of how users use their platform.

Qualitative data in statistics

Qualitative data can provide important statistics about any industry, any group of users, and any products. Here are some examples of qualitative data set collections in statistics:

The age, weight, and height of a group of body types to determine clothing size charts. 

The origin, gender, and location for a census reading.

The name, title, and profession of people attending a conference to aid in follow-up emails.

Difference between qualitative and quantitative data

Qualitative and quantitative data are quite different, but they bring equal value to any data analysis. When it comes to understanding data research, they differ in analysis methods, collection types, and uses.

Here are the differences between qualitative and quantitative data:

Qualitative data is individualized, descriptive, and relating to emotions.

Quantitative data is countable, measurable and relating to numbers.

Qualitative data helps us understand the why or how behind certain behaviors.

Quantitative data helps us understand how many, how much, or how often something occurred. 

Qualitative data is subjective and personalized.

Quantitative data is fixed and universal.

Qualitative research methods are conducted through observations or in-depth interviews.

Quantitative research methods are conducted through surveys and factual measuring. 

Qualitative data is analyzed by grouping the data into classifications and topics. 

Quantitative data is analyzed using statistical analysis.

Both provide a ton of value for any data collection and are key to truly understanding trending use cases and patterns in behavior. Dig deeper into quantitative data examples.

Characteristics of qualitative data

The characteristics of qualitative data are vast. There are a few traits that stand out amongst other data that should be understood for successful data analysis. 

Descriptive: describing or classifying in an objective and nonjudgmental way.

Detailed: giving an account in words with full particulars.

Open-ended: having no determined limit or boundary.

Non-numerical: not containing numbers.

Subjective: based on or influenced by personal feelings, tastes, or opinions.

With qualitative data samples, these traits can help you understand the meaning behind the equation—or for lack of a better term, what’s behind the results. 

As we narrow down the importance of qualitative data, you should understand that there are different data types. Data analysts often categorize qualitative data into three types:

1. Binary data

Binary data is numerically represented by a combination of zeros and ones. Binary data is the only category of data that can be directly understood and executed by a computer.

Data analysts use binary data to build statistical models that predict how often the study subject is likely to be positive or negative, up or down, right or wrong, based on a two-valued scale.
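
As a minimal illustration (the survey responses here are made up), yes/no answers can be encoded as ones and zeros so a computer can tally them directly:

```python
# Hypothetical yes/no survey responses encoded as binary data
answers = ["yes", "no", "yes", "yes", "no", "yes"]
binary = [1 if a == "yes" else 0 for a in answers]

# With a 0/1 encoding, the share of positive responses is just the mean
positive_rate = sum(binary) / len(binary)
print(binary)         # [1, 0, 1, 1, 0, 1]
print(positive_rate)
```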

2. Nominal data

Nominal data, also referred to as "named, labeled data" or "nominal scaled data," is any type of data used to label something without giving it a numerical value.

Data analysts use nominal data to determine statistically significant differences between sets of qualitative data. 

For example, a multiple-choice test to profile participants’ skills in a study.
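Because nominal labels carry no numeric value, analysis usually starts by tabulating category frequencies rather than averaging. A small sketch with hypothetical survey labels:

```python
from collections import Counter

# Hypothetical nominal data: preferred support channel per respondent
channels = ["email", "phone", "email", "chat", "email", "phone"]

# Counter builds a frequency table of the labels
counts = Counter(channels)
print(counts.most_common())  # [('email', 3), ('phone', 2), ('chat', 1)]
```

These frequency tables are the usual input to tests of statistical significance between groups, such as a chi-square test.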

3. Ordinal data

Ordinal data is qualitative data categorized in a particular order or on a ranging scale. When researchers use ordinal data, the order of the qualitative information matters more than the difference between each category. Data analysts might use ordinal data when creating charts, while researchers might use it to classify groups, such as age, gender, or class.

For example, a Net Promoter Score (NPS) survey has results on a 0-10 satisfaction scale.
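
Because NPS treats the 0-10 ratings as ordered categories rather than a true numeric scale, the score is computed from category membership, not from an average. A sketch with hypothetical ratings, using the standard NPS cutoffs:

```python
def nps(scores):
    """Net Promoter Score from 0-10 ordinal ratings.

    Promoters score 9-10, detractors 0-6; NPS is the percentage
    of promoters minus the percentage of detractors.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

ratings = [10, 9, 8, 7, 6, 9, 3, 10]  # hypothetical survey responses
print(nps(ratings))  # 4 promoters, 2 detractors out of 8 -> 25.0
```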

When should you use qualitative research?

One of the important things to learn about qualitative data is when to use it. 

Qualitative data is used when you need to determine particular trends in traits or characteristics, or to form parameters for larger data sets to be observed. Qualitative data provides the means by which analysts can describe and make sense of the world around them.

You would use qualitative data to help answer questions like who your customers are, what issues or problems they’re facing, and where they need to focus their attention, so you can better solve those issues.

Qualitative data is widely used to understand the language consumers speak, so apply it where necessary.

Pros and cons of qualitative data

Qualitative data is a detailed, deep understanding of a topic through observing and interviewing a sample of people. There are both benefits and drawbacks to this type of data. 

Pros of qualitative data

Qualitative research is affordable and requires a small sample size.

Qualitative data provides a predictive element and provides specific insight into development.

Qualitative research focuses on the details of personal choice and uses these individual choices as workable data.

Qualitative research works to remove bias from its collected data by using an open-ended response process.

Qualitative data research provides useful content in any thematic analysis.

Cons of qualitative data 

Qualitative data can be time-consuming to collect and can be difficult to scale out to a larger population.

Qualitative research creates subjective information points.

Qualitative research can involve significant levels of repetition and is often difficult to replicate.

Qualitative research relies on the knowledge of the researchers.

Qualitative research does not offer statistical analysis; for that, you have to turn to quantitative data.

Qualitative data collection methods

Here are the main approaches and collection methods of qualitative studies and data: 

1. Interviews

Personal interviews are one of the most commonly used data collection methods in qualitative research because of their personal approach.

The interview may be informal and unstructured, and is often conversational in nature. The interviewer or researcher collects data directly from the interviewee, one-to-one. Open-ended questions are mostly asked spontaneously, with the interviewer allowing the flow of the conversation to shape the questions and answers.

The point of the interview is to learn how the interviewee feels about the subject.

2. Focus groups

Focus groups are held in a discussion-style setting with 6 to 10 people, with a moderator assigned to monitor and guide the discussion based on focus questions.

Depending on the qualitative data needed, the members of the group may have something in common. For example, in a study on dog sled racing, the participants might all be mushers who understand dogs, sleds, and snow and have sufficient knowledge of the subject matter.

3. Data records 

Data doesn’t start with your collection; it has most likely been gathered in the past.

Using already existing, reliable data and similar sources of information is a surefire way to support qualitative research. Much like going to a library, you can review books and other reference material to collect relevant data for the research.

For example, if you were to study trends in dictionaries, you would want to know the history of every dictionary made, starting with the very first one.

4. Observation

Observation is a longstanding qualitative data collection method in which the researcher simply observes behaviors in a participant's natural setting. They keep a keen eye on the participants and take detailed notes to capture innate responses and reactions without prompting.

Observation is typically an inductive approach, used when a researcher has little or no prior idea of the research phenomenon.

Other documentation methods, such as video recordings, audio recordings, and photo imagery, may be used to obtain qualitative data.

Further reading: Site observations through heatmaps

5. Case studies

Case studies are an intensive analysis of an individual person or community with a stress on developmental factors in relation to the environment. 

In this method, data is gathered by an in-depth analysis and is used to understand both simple and complex subjects. The goal of a case study is to see how using a product or service has positively impacted the subject, showcasing a solution to a problem or the like. 

6. Longitudinal studies

A longitudinal study is where people who share a single characteristic are studied over a period of time. 

This data collection method is performed on the same subject repeatedly over an extended period. It is an observational research method that goes on for a few years and, in some cases, decades. The goal is to find correlations of subjects with common traits.

For example, medical researchers conduct longitudinal studies to ascertain the effects of a drug or the symptoms related to it.

Qualitative data analysis tools

And, as with anything—you aren’t able to be successful without the right tools. Here are a few qualitative data analysis tools to have in your toolbox: 

MAXQDA —A qualitative and mixed-method data analysis software 

Fullstory —A behavioral data and analysis platform

ATLAS.ti —A powerful qualitative data tool that offers AI-based functions 

Quirkos —Qualitative data analysis software for the simple learner

Dedoose —A project management and analysis tool for collaboration and teamwork

Taguette —A free, open-source, data analysis and organization platform 

MonkeyLearn —AI-powered, qualitative text analysis, and visualization tool 

Qualtrics —Experience management software

Frequently asked questions about qualitative data

Is qualitative data subjective?

Yes, categorical data or qualitative data is information that cannot generally be proven. For instance, the statement “the chair is too small” depends on what it is used for and by whom it is being used.

Who uses qualitative data?

If you’re interested in the following, you should use qualitative data:

Understand emotional connections to your brand

Identify obstacles in any funnel, for example with session replay

Uncover confusion about your messaging

Locate product feature gaps 

Improve usability of your website, app, or experience

Observe how people talk, think, and feel about your brand

Learn how an organization selects vendors and partners

What are the steps for qualitative data?

1. Transcribe your data: Once you’ve collected all the data, you need to transcribe it. The first step in analyzing your data is arranging it systematically, which means converting all of it into a text format.

2. Organize your data: Go back to your research objectives and organize the data based on the questions asked. Arrange your research objectives in a table so they appear visually clear. Avoid working with unorganized data; it will yield no conclusive results.

3. Categorize and assign the data: Coding qualitative data means categorizing it and assigning variables, properties, and patterns. Coding is an important step in qualitative data analysis, as it lets you derive theories from relevant research findings and begin to gain in-depth insight into the data to make informed decisions.

4. Validate your data: Data validation is a recurring step that should be followed throughout the research process. There are two sides to validating data: the accuracy of your research methods, and their reliability, the extent to which the methods produce accurate data consistently.

5. Conclude the data analysis: Present your data in a report that shares the methods used to conduct the research, the outcomes, and the projected hypotheses of your findings in any related areas.
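
Step 3 above, coding, can be sketched as a simple keyword-based pass over transcripts. The codebook and transcript snippets below are hypothetical, and real qualitative coding is far more interpretive, but the mechanics look like this:

```python
# Hypothetical codebook mapping analysis codes to indicator keywords
codebook = {
    "usability": ["confusing", "hard to find", "intuitive"],
    "pricing": ["expensive", "cheap", "cost"],
}

# Hypothetical interview transcript excerpts
transcripts = [
    "The checkout page was confusing and felt expensive for what I got.",
    "Navigation was intuitive once I learned the menus.",
]

# Assign every code whose keywords appear in the excerpt
coded = []
for text in transcripts:
    lowered = text.lower()
    codes = sorted(
        code for code, keywords in codebook.items()
        if any(kw in lowered for kw in keywords)
    )
    coded.append(codes)

print(coded)  # [['pricing', 'usability'], ['usability']]
```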

Is qualitative data better than quantitative data?

One is not better than the other; rather, they work cohesively to create a better overall data analysis experience. Understanding the importance of both qualitative and quantitative data will produce the best possible analysis outcome for any study.

Further reading : Qualitative vs. quantitative data — what's the difference?


  • Open access
  • Published: 18 July 2024

Parent version of the Eating Disorder Examination: Reliability and validity in a treatment-seeking sample

  • Lisa Hail 1 ,
  • Catherine R. Drury 1 , 2 ,
  • Robert E. McGrath 2 ,
  • Stuart B. Murray 3 ,
  • Elizabeth K. Hughes 4 , 5 ,
  • Susan M. Sawyer 5 ,
  • Daniel Le Grange 1 , 6 &
  • Katharine L. Loeb 2 , 7  

Journal of Eating Disorders, volume 12, Article number: 101 (2024)

Assessment of eating disorders (ED) in youth relies heavily on self-report, yet persistent lack of recognition of the presence and/or seriousness of symptoms can be intrinsic to ED. This study examines the psychometric properties of a semi-structured interview, the parent version of the Eating Disorder Examination (PEDE), developed to systematically assess caregiver report of symptoms.

A multi-site, clinical sample of youth (N = 522; age range: 12 to 18 years) seeking treatment for anorexia nervosa (AN) and subsyndromal AN was assessed using the Eating Disorder Examination (EDE) for youth and the PEDE for collateral caregiver report.

Internal consistencies of the four PEDE subscales were on par with established ranges for the EDE. Significant medium-sized correlations and poor to moderate levels of agreement were found between the corresponding subscales on each measure. For the PEDE, confirmatory factor analysis of the EDE four-factor model provided a poor fit; an exploratory factor analysis indicated that a 3-factor model better fits the PEDE.

Conclusions

Findings suggest that the PEDE has psychometric properties on par with the original EDE. The addition of the caregiver perspective may provide incremental information that can aid in the assessment of AN in youth. Future research is warranted to establish psychometric properties of the PEDE in broader transdiagnostic ED samples.

Plain English summary

Assessments for eating disorders rely primarily on self-report; yet, the denial of symptoms or symptom severity among adolescents with anorexia nervosa can complicate assessment and delay treatment in this population. The Parent Eating Disorder Examination (PEDE) is the first semi-structured interview formally developed to improve childhood eating disorder assessment by including caregiver perspectives. In this study, a large sample of adolescents with anorexia nervosa completed a self-report interview (the Eating Disorder Examination or EDE) and their parents completed the PEDE. The PEDE appeared to measure parents’ report of their child’s eating disorder symptoms consistently. Results from both interviews were related to one another but did not completely agree. This suggests that in an eating disorder assessment, the PEDE can provide additional information from caregivers that might reduce diagnostic confusion and lead to earlier intervention for youth with anorexia nervosa.

With the publication of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders ( DSM-5 ) [ 1 ], the criteria for diagnosing eating disorders (ED) were revised to reflect greater developmental sensitivity for youth. These modifications were particularly important as the onset of ED is most common in adolescence [ 2 ]. However, there remain many challenges to diagnosing restrictive ED, such as anorexia nervosa (AN), in children and adolescents, which could delay treatment of a pernicious, often refractory disorder.

One of the most significant challenges in relying on self-report in ED assessment and case identification is the persistent lack of recognition of the seriousness of symptoms, a core diagnostic feature of AN, which renders history of illness and present symptoms vulnerable to inaccuracies [ 3 , 4 , 5 ]. However, typical assessment methods for ED rely primarily on self-report, and may therefore be insufficient, particularly for younger individuals [ 6 , 7 , 8 , 9 ]. Compared to adults, adolescents generally score lower on measures of ED pathology despite similar levels of malnutrition [ 10 ], and appear to experience ED symptoms differently [ 11 ]. Minimization might be intrinsic to a developmentally normative limitation in recognizing the potential consequences of risky behaviors such as those associated with ED [ 5 , 10 , 12 , 13 ]. Shorter duration of illness could compound this, further limiting adolescents’ appreciation of the current and future impact of what could in fact become a severe and enduring disorder [ 14 , 15 ]. Relatedly, adolescents are unlikely to independently seek help for their ED, and may even engage in strategic minimization of symptoms, to avoid the implications of symptom endorsement (e.g., intervention efforts on the part of adults).

In addition, there are cognitive and emotional obstacles to evaluating symptoms of AN in youth, as several of the criteria are psychological in nature. For example, the ability to report a fear of weight gain requires that the young person be able to recognize and label their affective state correctly, and to identify the motivation behind their behavior [ 15 , 16 ]. Other criteria are more abstract in nature (e.g., disturbance in the experience of shape and weight, undue influence of shape and weight on self-evaluation), and require the developmental maturation of abstract reasoning to recognize and endorse ED symptoms [ 2 , 17 , 18 , 19 ].

The utility of multi-informant methods of assessing child psychopathology is long-established, and approaches have advanced over time [ 6 , 20 , 21 ]. However, most measures used for youth with ED – with notable exceptions [ 3 ] – rely exclusively on direct patient report [ 7 ] despite the unique risks posed by false negatives in case identification, particularly of AN. Two studies have examined parent-child concordance on the Eating Disorder Examination (EDE) [ 22 ] by administering the interview to parents with minimal modifications to the measure [ 18 , 23 ]. For example, Couturier and colleagues [ 23 ] simply changed wording of questions from you to your child and retained items reflecting the internal experience of the child without prompting parents for data on why and how these experiences can be inferred through the child’s behavior. They found that youth with AN scored lower than their parents on two EDE subscales (Restraint and Weight Concern), while Mariano and colleagues [ 18 ] found good concordance between youth and parent scores. Mariano and colleagues [ 18 ] proposed that adolescents in their study were less likely to minimize their symptoms due to the timing of EDE administration (i.e., at the end of a two-day psychological assessment). It is also possible that more extensive adaptations to the EDE are needed to assist parents in consistently providing a comprehensive report of symptoms.

To address the need for a standardized method for including parental report in the assessment of ED, we developed a parent version of the EDE (PEDE) [ 7 , 24 , 36 ], with permission and input from the first author of the original measure [ 22 ], that mirrors the EDE but includes detailed questions to assess for observable indicators of ED. Although the EDE can be administered as young as 14 years and has been adapted for use in children aged 8 years and older [ 25 ], these assessments do not incorporate caregiver perspectives. Thus, the overall objective of the current study was to evaluate the psychometric properties of the PEDE in a large, multi-site sample of children and adolescents seeking treatment for AN and subsyndromal AN (SAN). Specifically, we examined the internal consistency of the PEDE subscales and the PEDE’s convergent and construct validity in relation to the EDE. We also aimed to compare PEDE and EDE rates of AN diagnosis. We hypothesized that:

Internal consistency of the four PEDE subscales (Restraint, Eating Concern, Weight Concern, Shape Concern) and global score as measured by Cronbach’s alpha reliability coefficients would be similar to those previously established for the EDE, which found values in the range from .44 to .85 [ 26 ].

Convergent validity would be demonstrated by small, positive correlations and low to moderate levels of agreement between the established EDE subscales and the corresponding subscales on the PEDE. These positive relationships would indicate that the subscales are tapping into similar constructs. The small effect sizes and low to moderate levels of agreement (as opposed to good or excellent) would suggest that both youth and parent perspectives offer incremental information to an ED assessment [ 23 ].

Both the EDE and PEDE would have a different factor structure than the four original subscales presented by Cooper and colleagues [ 27 ] as no studies evaluating factor structure in the EDE have confirmed this model. No specific hypotheses were possible for the expected factor structure given variations in the results of three prior studies [ 28 , 29 , 30 ], only one of which included adolescents [ 28 ].

The PEDE would yield a diagnosis of AN more frequently than the EDE among participants with both AN and SAN.

Participants

Participants were youth and guardian informants who presented to two research-based ED treatment programs in the United States (US; New York and Chicago) and one in Melbourne, Australia. Researchers at these sites received training on the EDE and PEDE, administered both interviews to youth and their caregivers presenting to clinical research centers for treatment of a suspected ED, and contributed deidentified baseline data as part of this multisite collaboration to establish the PEDE’s psychometric properties. Any larger studies [ 31 , 32 ] from which these deidentified data were derived for secondary analyses were approved by the respective institutions’ institutional review boards; the present study was designated exempt from board review.

In order to assess the reliability and validity of the PEDE in a relatively homogenous sample, this study focused specifically on youth presenting to these sites with probable AN or SAN [ 32 ], a site-specific research category that would fall under other specified feeding and eating disorder (OSFED) in DSM-5-TR nomenclature. The original inspiration for developing the PEDE was to help identify true caseness in the context of underweight ED where denial and minimization are prominent and therefore parental report may be most useful [ 23 ]. Thus, submitted cases ( n =  833) were excluded from analysis if one or more of the following were met: (a) percent expected body weight (EBW) based on median body mass index (mBMI) was greater than 100% ( n  = 232, 27.85%), (b) criteria for bulimia nervosa or binge eating disorder might be met by virtue of 12 or greater EDE objective bulimic episodes in the past three months and weight > 85% of EBW ( n  = 0), (c) age was younger than 12 years ( n  = 83, 9.96% of the full sample), or (d) there was insufficient information to accurately determine EBW ( n  = 1). Although low weight is a relative, personalized construct and population norms are not a valid benchmark against which to determine individual-level weight status, these weight criteria were used to reduce the likelihood of false positives and because not all sites recorded a more individualized measure of EBW and all reported percent of mBMI. The resulting sample included 522 youth paired with guardian informants, ranging in age from 12 to 18 years ( M  = 15.4; SD  = 1.7), 89.7% parent- or self-identified as female, who were at 54–99% of mBMI ( M  = 84.3%; SD  = 8.5). Further demographic data (including caregiver gender identity) were not reported consistently across all sites. The majority of participants were recruited from sites in Chicago ( n  = 219; 42.0%) and Melbourne ( n  = 260; 49.8%); 8.2% of participants ( n  = 43) were recruited from the New York-based site. 
There was a significant difference in PEDE global scores across sites ( F (2,6) = 7.49, p  = .002, η 2  = 0.03), with guardians in New York reporting higher levels of ED pathology than those in Chicago or Melbourne ( p  = .002). EDE global scores did not significantly differ across sites ( p  = .725).

Eating Disorder Examination (EDE) Version 16.0

The EDE [ 22 ] is a semi-structured clinical interview that was originally developed for use with adults but is also used, and has been found psychometrically acceptable, as a diagnostic and predictive tool with younger populations [ 33 , 34 ]. The EDE comprises 33 items and uses a 7-point scale to measure the frequency (0 = “absence of the feature”; 6 = “feature present every day”) and severity (0 = “absence of the feature”; 6 = “feature present to an extreme degree”) of ED attitudes and behaviors. Most questions capture data from the past 28 days only, with the exception of the ten diagnostic items that extend to the previous three months to reflect the time frame used to make DSM ED diagnoses. The EDE includes four subscales: Restraint (5 items), Eating Concern (5 items), Shape Concern (8 items), and Weight Concern (5 items). The subscales are averaged to give a rating of global severity. Although these subscales have not been supported in prior factor analyses, they remain widely used in both research and clinical practice [ 18 ].
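The scoring scheme described above (items averaged within subscales, subscales averaged into a global score) can be sketched in a few lines. This is an illustrative Python sketch only; the item labels (`r1`, `e1`, etc.) are placeholders, not the actual EDE item assignments.

```python
# Illustrative EDE-style scoring: item labels below are hypothetical
# placeholders, not the real EDE item-to-subscale mapping.
from statistics import mean

SUBSCALES = {
    "Restraint": ["r1", "r2", "r3", "r4", "r5"],
    "Eating Concern": ["e1", "e2", "e3", "e4", "e5"],
    "Shape Concern": ["s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8"],
    "Weight Concern": ["w1", "w2", "w3", "w4", "w5"],
}

def score_ede(ratings: dict) -> dict:
    """Average 0-6 item ratings into four subscale scores and a global score."""
    scores = {name: mean(ratings[item] for item in items)
              for name, items in SUBSCALES.items()}
    # Global severity is the mean of the four subscale scores.
    scores["Global"] = mean(scores[name] for name in SUBSCALES)
    return scores
```

For example, a respondent who rated every item 3 would receive 3.0 on every subscale and on the global score.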

Parent Eating Disorder Examination (PEDE)

The PEDE version 1.4 [ 24 ] includes items that directly mirror the content and 7-point scoring scheme of the EDE. While the term “parent” is used, this measure is appropriate to use with any adult who is in the primary caretaking role. In the parent version, endorsement or denial (depending on the item) of a stem question triggers additional queries about behavioral observations and indicators of intent that are not present in the patient-directed EDE. Two additional items were added to the PEDE to assess for refusal to maintain a normal body weight and denial of the seriousness of low body weight , diagnostic features of AN that are not explicitly asked in the EDE. The item reaction to prescribed weighing from the EDE Weight Concern subscale was excluded because the item proved confusing when piloted. In total, the PEDE has 41 scored items. A symptom is rated as present if the parent has directly observed the phenomenon; heard the child report it; or heard reports from a reliable third party such as other family members, friends, or school personnel.

The PEDE requires that parents use their best judgment, including all available sources of information, in responding to the items. For example, in assessing fear of weight gain , there is not only an item evaluating verbal expression of this fear but also subsequent items assessing for indications that the young person is refusing attempts to increase their weight “by passive resistance (e.g., refusing to eat) and/or active resistance, such as yelling, throwing a tantrum, throwing food or dishes, running away, threatening to hurt themself if made to eat,” or other means. Other examples include specific questions that evaluate evidence of purging behaviors (e.g., “Have you noticed any vomit residue or odor in the bathroom or on your child’s clothes?”; “Has your child rushed to the bathroom during a meal or immediately after eating?”).

The PEDE version 1.4 was developed from the EDE version 16.0 [ 22 ] and contains diagnostic items consistent with DSM-IV-TR [ 35 ] diagnostic criteria. Additionally, the PEDE items that assess for behavioral indicators allow for the evaluation of the revised DSM-5 criteria, including those criteria that are not explicitly assessed by the EDE version 16.0 or 17.0 (i.e., refusal to maintain a normal body weight and denial of the seriousness of low body weight ). The PEDE version 2.0 has since been revised aligning the measure with DSM-5 diagnostic criteria and incorporating gender-neutral language, and is publicly available [ 36 ].

Statistical Analyses

Cronbach’s alpha coefficients were calculated to evaluate the internal consistency of the EDE and PEDE subscales and global scores using IBM SPSS Statistics v.24.0, with values less than .5 considered to be unacceptable, greater than or equal to .5 poor, greater than or equal to .6 questionable, greater than or equal to .7 acceptable, greater than or equal to .8 good, and greater than or equal to .9 excellent [ 37 ].
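The alpha computation and the George and Mallery [ 37 ] interpretive bands used here can be sketched as follows. This is an illustrative Python sketch (the study itself used SPSS); the function names are ours.

```python
# Cronbach's alpha from per-item score columns, with the George & Mallery
# (2003) interpretive bands used in this study.
from statistics import variance

def cronbach_alpha(items):
    """items: list of k item-score lists, each of length n_respondents."""
    k = len(items)
    item_var = sum(variance(col) for col in items)
    totals = [sum(vals) for vals in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - item_var / variance(totals))

def alpha_label(a):
    """Map an alpha value onto the bands named in the text."""
    for cut, name in [(.9, "excellent"), (.8, "good"), (.7, "acceptable"),
                      (.6, "questionable"), (.5, "poor")]:
        if a >= cut:
            return name
    return "unacceptable"
```

With perfectly correlated items, alpha is 1.0 (“excellent”); a value of .44, like the Eating Concern subscale result reported below, falls in the “unacceptable” band.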

Convergent validity was assessed through the correlation and level of agreement between the EDE and PEDE subscales. Specifically, bivariate Pearson correlations were calculated using IBM SPSS Statistics v.24.0; as suggested by Cohen [ 38 ], .10 was considered a weak or small correlation, .30 medium, and .50 or larger strong or large. Additionally, the level of agreement between the EDE and PEDE subscales and global scores was measured using a two-way random effects model (absolute agreement, average measures) intraclass correlation coefficient (ICC). In accordance with the 95% confidence interval of the ICC estimate, values less than .50 were considered evidence of poor agreement, between .50 and .75 moderate agreement, between .75 and .90 good agreement, and greater than .90 excellent agreement [ 39 ].

To assess the goodness of fit of the original four-factor structure of the traditional EDE subscales developed by Fairburn and colleagues [ 27 ], confirmatory factor analysis (CFA) was conducted with Mplus (version 8.0) [ 40 ]. Model fit was evaluated using incremental fit tests of a “good fit” [ 41 , 42 ], including the Tucker-Lewis index (TLI) ≥ .90 and comparative fit index (CFI) ≥ .90. Two absolute measures of fit were also used: the standard root mean square residual (SRMR) ≤ .08 and root mean square error of approximation (RMSEA) ≤ .10 (< .05 preferred). The same procedure was repeated with the PEDE. Given the results of the CFA, an exploratory factor analysis (EFA) was conducted using IBM SPSS Statistics v.24.0 to determine if an alternate model was a better fit for the PEDE.
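The fit indices named above are computed by Mplus internally; as an illustration of what they measure, the standard formulas can be written from the model and baseline (null) chi-square statistics. This is a hedged sketch of those textbook formulas, not the authors' code.

```python
# Standard formulas for CFI, TLI, and RMSEA from model and baseline
# chi-square values (assumes the baseline model fits worse than the model).
from math import sqrt

def fit_indices(chi2, df, chi2_null, df_null, n):
    d = max(chi2 - df, 0.0)            # model noncentrality estimate
    d_null = max(chi2_null - df_null, 0.0)
    cfi = 1.0 - d / max(d, d_null)
    tli = ((chi2_null / df_null) - (chi2 / df)) / ((chi2_null / df_null) - 1.0)
    rmsea = sqrt(d / (df * (n - 1)))
    return {"CFI": cfi, "TLI": tli, "RMSEA": rmsea}
```

For instance, with hypothetical values chi2 = 100 on 50 df against a baseline chi2 of 1000 on 66 df in a sample of 522, CFI is about .95, TLI about .93, and RMSEA about .04, which would pass all of the cutoffs listed above.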

Planned analyses for diagnostic agreement between the PEDE and EDE included chi-squared tests and Cohen’s kappa to compare each measure’s diagnostic items.
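Cohen's kappa corrects observed agreement for the agreement expected by chance. As a minimal sketch of the planned statistic (assuming each measure yields a categorical diagnosis code per participant):

```python
# Cohen's kappa for two categorical ratings of the same participants.
from collections import Counter

def cohens_kappa(a, b):
    assert len(a) == len(b)
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    # Chance agreement from each rater's marginal category frequencies.
    p_e = sum(ca[c] * cb[c] for c in set(a) | set(b)) / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

Kappa is 1 for perfect agreement, 0 for chance-level agreement, and negative when the two measures disagree more than chance would predict.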

Internal Consistency

The coefficient alpha values for the four established subscales and global score of the EDE in the present sample ranged from acceptable to excellent: .86 for the Restraint scale, .75 for Eating Concern, .93 for Shape Concern, .83 for Weight Concern, and .93 for the global score. While the PEDE reliability coefficients for the Shape Concern and Weight Concern subscales (.85 and .74, respectively) and the global score (.80) fell in the acceptable to good ranges, alpha coefficients were poor (.59) for the Restraint subscale and unacceptable (.44) for the Eating Concern subscale.

Construct validity

Table  1 shows the results of the Pearson correlations. There were significant medium-sized positive correlations between the corresponding subscales and global scores (all p values < .001) ranging from .36 to .49. In each case, the correlation with the corresponding scale of the other instrument was higher than that with any other scale. Estimates of inter-rater agreement between the EDE and PEDE subscale and global scores are shown in Table  2 . There was moderate agreement between the PEDE and EDE global scores and the Restraint, Shape Concern, and Weight Concern subscale scores, and poor agreement between the Eating Concern subscales.

The CFA for the EDE four-factor model, based on established subscales, approached an acceptable fit after removing the preoccupation with shape or weight item from the Weight Concern factor because of a negative loading (see Table  3 for standardized factor loadings): CFI = .90, TLI = .88, RMSEA = .09, and SRMR = .05. The CFA of the four-factor model for the PEDE provided a poor fit to the data: CFI = .70, TLI = .66, RMSEA = .11 (90% CI: .10, .11), and SRMR = .10.

For the EFA, the scree plot, parallel analysis, and Velicer’s minimum average partial (MAP) test were examined, with the latter two based on SPSS macros developed by O’Connor [ 43 ]. All three tests supported retaining a three-factor model for the PEDE. Principal axis factoring (PAF) and promax rotation (power = 4) were used to extract the three factors. Loadings above .30 were used as evidence of a meaningful relationship between an item and a factor [ 44 ]. These three factors accounted for 47.7% of the total variance of the items; see Table  3 . One item, avoidance of eating , was not associated with any scale due to insufficient loading. Based on the items within each factor, the three factors were labeled affective preoccupation with shape, weight, and eating (10 items, α  = 0.87, 30.6% of total variance), importance of shape, weight, and restriction (7 items, α  = 0.75, 9.1% of total variance), and discomfort with eating and body display (4 items, α  = 0.58, 8.0% of total variance).
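Horn's parallel analysis, one of the retention tests named above, compares the observed correlation-matrix eigenvalues against eigenvalues obtained from random data of the same dimensions, retaining only factors whose observed eigenvalues exceed the random benchmark. The sketch below is a minimal NumPy illustration of that logic (the study used O'Connor's SPSS macros, not this code).

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Horn's parallel analysis: count components whose observed
    correlation-matrix eigenvalues exceed the mean eigenvalues of
    random normal data of the same (n, p) shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Observed eigenvalues, sorted descending.
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.zeros(p)
    for _ in range(n_sims):
        sim = rng.standard_normal((n, p))
        rand += np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    rand /= n_sims  # mean random eigenvalues per position
    return int(np.sum(obs > rand))
```

On simulated data with a single strong common factor, the function retains one factor, because only the first observed eigenvalue rises above what random noise produces.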

Diagnostic Agreement

We initially planned to assess diagnostic agreement between the PEDE and EDE using chi-squared tests and Cohen’s kappa to compare each measure’s diagnostic items. However, of those participants who were not missing any EDE diagnostic items ( n  = 361), only 237 had no missing PEDE diagnostic items. A t -test comparison of those with and without missing PEDE diagnostic items found that participants without missing PEDE data had significantly higher PEDE global scores ( p  = .002) and significantly lower BMIs ( p  = .013) than participants who were missing PEDE diagnostic items. As the patients who could be included in this analysis appeared to have a more severe ED presentation than the remainder of the sample, results from a PEDE-EDE diagnostic comparison would be difficult to interpret. This confound precluded our conducting the planned analyses to assess diagnostic agreement.

To our knowledge, the PEDE is the first semi-structured interview formally developed with the aim to improve ED assessment in youth through the addition of caregiver perspectives, helping to reduce Type II error rate and under-identification of symptoms in youth with ED [ 45 ]. This study investigated the psychometric properties of the PEDE in a relatively large, international, multisite sample of families seeking treatment for AN and SAN. As predicted, the internal consistency of the PEDE was within the range of what has been published for the EDE (.44 to .85) [ 26 ], though lower than the EDE’s reliability in this sample. Regarding convergent validity, effect sizes were larger than the expected small effect size based on the meta-analytic evidence for parent-child correlation for both internalizing (.26) and externalizing (.32) disorders [ 6 ]. However, the lack of strong concordance between the EDE and PEDE subscales indicates that the information captured by the PEDE is not redundant with the EDE. This finding suggests that information from parent informants complements diagnostic and clinical information over and above that obtained by youth self-report. Specifically, the behavioral indicators and examples provided by the PEDE appear to elicit diagnostically relevant information from parents that might otherwise remain unreported. In clinical practice, such questioning can also serve to educate parents that these behaviors and beliefs are part of the ED and thereby improve their capacity to clinically monitor and intervene to support their child’s recovery.

The EDE is used with four subscales, yet none of the three studies that have examined the factor structure has replicated the four-factor model [ 28 , 29 , 30 ]. In this sample, the original factor structure approached an acceptable fit with the youth self-report data, but only after removing the preoccupation with weight and shape item from the Weight Concern subscale. Given the inconsistency of factor analysis results across studies of the EDE [ 26 ], it was not surprising that another underlying structure of three subscales seemed to provide the best fit for the PEDE. Although the PEDE has an empirically derived, three-factor structure, the original four-subscale model of the PEDE was found to measure constructs similar to those measured by the corresponding EDE scales, based on significant positive associations between corresponding subscales on the youth and parent interview. As such, it is reasonable to utilize the PEDE based on the four-subscale model to maintain consistency for research purposes. When using it for exclusively clinical purposes, the three-factor model may provide more meaningful constructs. Prior research also suggests that the EDE global score is a more useful measure of ED pathology than its subscales [ 46 ]; in light of the current study’s internal consistency and construct validity results, the PEDE global score may also provide a more valid interpretation of its findings.

Limitations of this study include a predominantly female, treatment-seeking sample with specific criteria applied, including the use of population norms (i.e., %mBMI) to determine weight eligibility instead of individualized weight status based on historical growth patterns. These limitations constrained an understanding of how the PEDE interview may perform in more diverse, transdiagnostic (including atypical AN), and non-treatment-seeking samples. Resource limitations prevented duplicate assessments by multiple interviewers to establish inter-rater reliability or compare ratings from caregivers of different genders, but this is worthy of future study, as is test-retest reliability. Furthermore, missing data precluded completion of the diagnostic agreement analyses originally proposed by this study. Although the intent of developing the PEDE was to aid in the identification of AN/SAN, future research should aim to evaluate the measure’s ability to distinguish between transdiagnostic ED cases and non-cases (i.e., criterion validity) as compared to the EDE using samples of adolescents with ED, subsyndromal ED, and no ED, and sensitivity and specificity analyses such as receiver-operator characteristic (ROC) curves. Additional work is also needed to more thoroughly assess the PEDE’s validity and predictive power, including its relationships with other measures of ED and non-ED symptoms, other parent-report measures, clinician-assigned diagnosis, and clinical outcomes. Finally, by applying more sophisticated multi-informant statistical methods [ 21 ], future research could establish how clinicians and researchers can systematically integrate potentially conflicting perspectives from youth and their caregivers.

In summary, the use of parental informants is consistent with the approach to assessment of other areas of psychopathology in youth in which collateral informants frequently aid in the evaluation and diagnosis process [ 6 , 20 , 47 ]. The introduction of the PEDE allows for a standardized way to incorporate caregiver reports to aid in the assessment of AN, potentially reducing diagnostic ambiguity and compensating for the denial and minimization inherent in the self-report of symptoms in this population. Our future research will focus on differences in diagnostic rates when parents are enlisted as informants in interview-based AN case identification efforts. Enhanced assessment approaches can theoretically make identification of clinically significant presentations more efficient and accurate, and lead to earlier intervention and improved outcomes.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. The PEDE 2.0 is available at https://ccebt.com/wp-content/uploads/2024/06/PEDE-2.0_gender-neutral.pdf .

Abbreviations

BMI: Body mass index

CFA: Confirmatory factor analysis

CFI: Comparative fit index

DSM: Diagnostic and Statistical Manual of Mental Disorders

EBW: Expected body weight

ED: Eating disorders

EDE: Eating Disorder Examination

EFA: Exploratory factor analysis

ICC: Intraclass correlation coefficient

MAP: Minimum average partial

mBMI: Median body mass index

OSFED: Other specified feeding and eating disorder

PAF: Principal axis factoring

PEDE: Parent Eating Disorder Examination

RMSEA: Root mean square error of approximation

SAN: Subsyndromal anorexia nervosa

SRMR: Standard root mean square residual

TLI: Tucker-Lewis index

US: United States

American Psychiatric Association. Diagnostic and statistical manual of mental disorders. 5th ed. Washington: American Psychiatric Association; 2013.

Lock J. An update on evidence-based psychosocial treatments for eating disorders in children and adolescents. J Clin Child Adolesc Psychol. 2015;44:707–21.

Accurso EC, Waller G. Concordance between youth and caregiver report of eating disorder psychopathology: development and psychometric properties of the eating Disorder-15 for Parents/Caregivers (ED‐15‐P). Int J Eat Disord. 2021;54:1302–6.

Vandereycken W. Denial of illness in anorexia nervosa—a conceptual review: part 1 diagnostic significance and assessment. Eur Eat Disord Rev. 2006;14:341–51.

Vitousek KB, Daly J, Heiser C. Reconstructing the internal world of the eating-disordered individual: overcoming denial and distortion in self-report. Int J Eat Disord. 1991;10:647–66.

De Los Reyes A, Augenstein TM, Wang M, Thomas SA, Drabick DAG, Burgers DE, et al. The validity of the multi-informant approach to assessing child and adolescent mental health. Psychol Bull. 2015;141:858–900.

Loeb KL, Brown M, Munk Goldstein M. Assessment of eating disorders in children and adolescents. In: Le Grange D, Lock J, editors. Eating disorders in children and adolescents: a clinical handbook. New York: Guilford Press; 2011. pp. 156–98.

O’Logbon J, Newlove-Delgado T, McManus S, Mathews F, Hill S, Sadler K, et al. How does the increase in eating difficulties according to the Development and Well‐Being Assessment screening items relate to the population prevalence of eating disorders? An analysis of the 2017 Mental Health in Children and Young people survey. Int J Eat Disord. 2022;55:1777–87.

Swanson SA, Aloisio KM, Horton NJ, Sonneville KR, Crosby RD, Eddy KT, et al. Assessing eating disorder symptoms in adolescence: is there a role for multiple informants? Int J Eat Disord. 2014;47:475–82.

Couturier JL, Lock J. Denial and minimization in adolescents with anorexia nervosa. Int J Eat Disord. 2006;39:212–6.

Micali N, House J. Assessment measures for child and adolescent eating disorders: a review. Child Adolesc Ment Health. 2011;16:122–7.

Becker AE, Eddy KT, Perloe A. Clarifying criteria for cognitive signs and symptoms for eating disorders in DSM-V. Int J Eat Disord. 2009;42:611–9.

Loeb KL, Jones J, Roberto CA, Sonia Gugga S, Marcus SM, Attia E, et al. Adolescent–adult discrepancies on the eating disorder examination: a function of developmental stage or severity of illness? Int J Eat Disord. 2011;44:567–72.

Austin A, Flynn M, Richards K, Hodsoll J, Duarte TA, Robinson P, et al. Duration of untreated eating disorder and relationship to outcomes: a systematic review of the literature. Eur Eat Disord Rev. 2021;29:329–45.

Fisher M, Schneider M, Burns J, Symons H, Mandel FS. Differences between adolescents and young adults at presentation to an eating disorders program. J Adolesc Health. 2001;28:222–7.

Cooper PJ, Watkins B, Bryant-Waugh R, Lask B. The nosological status of early onset anorexia nervosa. Psychol Med. 2002;32:873–80.

Bravender T, Bryant-Waugh R, Herzog D, Katzman D, Kriepe RD, Lask B, et al. Classification of eating disturbance in children and adolescents: proposed changes for the DSM‐V. Eur Eat Disord Rev. 2010;18:79–89.

Mariano P, Watson HJ, Leach DJ, McCormack J, Forbes DA. Parent–child concordance in reporting of child eating disorder pathology as assessed by the eating disorder examination. Int J Eat Disord. 2013;46:617–25.

Rosso IM, Young AD, Femia LA, Yurgelun-Todd DA. Cognitive and emotional components of frontal lobe functioning in childhood and adolescence. Ann NY Acad Sci. 2004;1021:355–62.

Kuhn C, Aebi M, Jakobsen H, Banaschewski T, Poustka L, Grimmer Y, et al. Effective mental health screening in adolescents: should we collect data from youth, parents or both? Child Psychiatry Hum Dev. 2017;48:385–92.

Martel MM, Markon K, Smith GT. Research review: multi-informant integration in child and adolescent psychopathology diagnosis. J Child Psychol Psychiatry. 2017;58:116–28.

Fairburn CG, Cooper Z, O’Connor M. Eating disorder examination (16.0D). In: Fairburn CG. Cognitive behavior therapy and eating disorders. New York: Guilford Press; 2008. pp. 270–306.

Couturier J, Lock J, Forsberg S, Vanderheyden D, Yen HL. The addition of a parent and clinician component to the eating disorder examination for children and adolescents. Int J Eat Disord. 2007;40:472–5.

Loeb KL. Eating Disorder Examination – Parent Version (P-EDE), version 1.4. 2008. Unpublished measure based on Fairburn CG, Cooper Z, O’Connor M. Eating Disorder Examination (16.0D). In: Fairburn CG. Cognitive behavior therapy and eating disorders. New York: Guilford Press; 2008. pp. 270–306.

Bryant-Waugh RJ, Cooper PJ, Taylor CL, Lask BD. The use of the eating disorder examination with children: a pilot study. Int J Eat Disord. 1996;19:391–7.

Berg KC, Peterson CB, Frazier P, Crow SJ. Psychometric evaluation of the eating disorder examination and eating disorder Examination-Questionnaire: a systematic review of the literature. Int J Eat Disord. 2012;45:428–38.

Cooper Z, Cooper PJ, Fairburn CG. The validity of the eating disorder examination and its subscales. Br J Psychiatry. 1989;154:807–12.

Byrne SM, Allen KL, Lampard AM, Dove ER, Fursland A. The factor structure of the eating disorder examination in clinical and community samples. Int J Eat Disord. 2010;43:260–5.

Mannucci E, Ricca V, Di Bernardo M, Moretti S, Cabras PL, Rotella CM. Psychometric properties of EDE 12.0D in obese adult patients without binge eating disorder. Eat Weight Disord. 1997;2:144–9.

Grilo CM, Crosby RD, Peterson CB, Masheb RM, White MA, Crow SJ, et al. Factor structure of the eating disorder examination interview in patients with binge-eating disorder. Obesity. 2010;18:977–81.

Hughes EK, Le Grange D, Court A, Yeo MS, Campbell S, Allan E, et al. Parent-focused treatment for adolescent anorexia nervosa: a study protocol of a randomised controlled trial. BMC Psychiatry. 2014;14:105.

Loeb KL, Weissman RS, Marcus S, Pattanayak C, Hail L, Kung KC, et al. Family-based treatment for anorexia nervosa symptoms in high-risk youth: a partially-randomized preference-design study. Front Psychiatry. 2020;10:985.

Passi VA, Bryson SW, Lock J. Assessment of eating disorders in adolescents with anorexia nervosa: self-report questionnaire versus interview. Int J Eat Disord. 2003;33:45–54.

Wade TD, Byrne S, Bryant-Waugh R. The eating disorder examination: norms and construct validity with young and middle adolescent girls. Int J Eat Disord. 2008;41:551–8.

American Psychiatric Association. Diagnostic and statistical manual of mental disorders. 4th ed., text revision. Washington: American Psychiatric Association; 2000.

Loeb KL. Eating Disorder Examination – Parent Version (PEDE), Version 2.0. 2017. https://ccebt.com/wp-content/uploads/2024/06/PEDE-2.0_gender-neutral.pdf . Accessed 16 June 2024.

George D, Mallery P. SPSS for windows step by step: a simple guide and reference, 11.0 update. 4th ed. Boston: Allyn & Bacon; 2003.

Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. New York: Routledge; 2013.

Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med. 2016;15:155–63.

Muthén LK, Muthén BO. Mplus user’s guide. 7th ed. Los Angeles: Muthén & Muthén; 2015.

Byrne B. Structural equation modeling with LISREL, PRELIS, and SIMPLIS. Hillsdale: Lawrence Erlbaum; 1998.

Geiser C. Data analysis with Mplus. New York: Guildford; 2010.

O’Connor BP. SPSS and SAS programs for determining the number of components using parallel analysis and Velicer’s MAP test. Behav Res Methods Instrum Comput. 2000;32:396–402.

Costello AB, Osborne JW. Best practices in exploratory factor analysis: four recommendations for getting the most from your analysis. Pract Assess Res Eval. 2005;10:1–9.

Murray SB, Loeb KL, Le Grange D. Indexing psychopathology throughout family-based treatment for adolescent anorexia nervosa: are we on track? Adv Eat Disord. 2014;2:93–6.

Jenkins PE, Rienecke RD. Structural validity of the eating disorder Examination-Questionnaire: a systematic review. Int J Eat Disord. 2022;55(8):1012–30.

Kraemer HC, Measelle JR, Ablow JC, Essex MJ, Boyce WT, Kupfer DJ. A new approach to integrating data from multiple informants in psychiatric assessment and research: mixing and matching contexts and perspectives. Am J Psychiatry. 2003;160:1566–77.

Acknowledgements

We gratefully acknowledge Christopher Fairburn’s ongoing mentorship and support in forwarding research-based adaptations of the Eating Disorder Examination, including the parent version discussed in this paper.

This research was supported by a grant from the National Institute of Mental Health K23 MH074506 (PI: Loeb; ClinicalTrials.gov NCT00418977, Early Identification and Treatment of Anorexia Nervosa).

Author information

Authors and affiliations

Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, UCSF Weill Institute for Neurosciences, 675 18th Street, San Francisco, CA, USA

Lisa Hail, Catherine R. Drury & Daniel Le Grange

School of Psychology and Counseling, Fairleigh Dickinson University, Teaneck, NJ, USA

Catherine R. Drury, Robert E. McGrath & Katharine L. Loeb

Department of Psychiatry and the Behavioral Sciences, University of Southern California, Los Angeles, CA, USA

Stuart B. Murray

Department of Paediatrics, The University of Melbourne, Melbourne, Australia

Elizabeth K. Hughes

Murdoch Children’s Research Institute, Melbourne, Australia

Elizabeth K. Hughes & Susan M. Sawyer

Department of Psychiatry and Behavioral Neuroscience (emeritus), The University of Chicago, Chicago, IL, USA

Daniel Le Grange

Chicago Center for Evidence-Based Treatment, Chicago, IL, USA

Katharine L. Loeb


Contributions

KLL developed the PEDE and conceptualized the current study with LH, REM, and SBM. EKH, SMS, DLG, and KLL provided data for the study, which was curated and analyzed by LH under the supervision of REM and KLL. CRD contributed additional analyses with guidance from KLL and LH. LH wrote the initial draft of the manuscript, to which CRD, KLL, and REM also contributed. All authors read and edited subsequent iterations of the manuscript and approved the final version.

Corresponding author

Correspondence to Catherine R. Drury.

Ethics declarations

Ethics approval and consent to participate

This study was determined to qualify as exempt by Fairleigh Dickinson University’s Institutional Review Board (IRB). Any parent studies from which data were derived for secondary analyses included informed consent/assent and were approved by site-specific IRBs.

Consent for publication

Not applicable.

Competing interests

KLL receives royalties from Cambridge University Press and Routledge, and is a faculty member of and consultant for the Training Institute for Child and Adolescent Eating Disorders. DLG receives royalties from Guilford Press and Routledge, and is co-director of the Training Institute for Child and Adolescent Eating Disorders, LLC. SBM receives royalties from Oxford University Press, Routledge, and Springer.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Hail, L., Drury, C.R., McGrath, R.E. et al. Parent version of the Eating Disorder Examination: Reliability and validity in a treatment-seeking sample. J Eat Disord 12, 101 (2024). https://doi.org/10.1186/s40337-024-01062-4


Received: 29 March 2024

Accepted: 10 July 2024

Published: 18 July 2024

DOI: https://doi.org/10.1186/s40337-024-01062-4


  • Eating disorder
  • Anorexia nervosa
  • Adolescents

Journal of Eating Disorders

ISSN: 2050-2974

  • General enquiries: [email protected]


YRBSS Frequently Asked Questions

The YRBSS FAQ page provides information to help you learn more about the surveillance system and its use.


Top 5 YRBSS FAQs

Are YRBSS results available by zip code, census tract, school, local school district, or county? Are YRBSS results available for my town, city, or local school district?

YRBSS data are not available by zip code, census tract, or school. Sample size limitations and confidentiality requirements do not support analyses at these levels.

YRBSS data are available for a small number of specifically funded local school districts or counties. CDC funds certain local school districts to conduct the YRBS. Some of those local school districts are county-based. See Participation Maps & History for more information about county-based local school districts with YRBSS data. Data are only available for local school districts or counties on the list. No other local YRBSS data are available.

County-level identifiers are not available in the National YRBS data set or in most state data sets.

When are updated YRBSS results released?

Most YRBSs are conducted during the spring of odd-numbered years and results are released in the summer of the following year. For example, results from the 2019 national, state, and local YRBSs were released in an MMWR Surveillance Summary during the summer of 2020. The specific release date for a given cycle is posted on the YRBSS home page as soon as it has been determined.

What is the suggested citation for the YRBSS questionnaire or YRBSS data in a publication?

YRBSS questionnaires should be cited as follows:

Centers for Disease Control and Prevention. [survey year] Youth Risk Behavior Survey Questionnaire. Accessed [date]. www.cdc.gov/yrbs.

Do I need permission to use YRBSS questionnaires for my study, area, local school district, or school? Is there a cost?

The YRBSS questionnaires are in the public domain, and no permission is required to use them. You may download and use the questionnaires as is or with changes at no charge.

Are the YRBSS questionnaires available in languages other than English?

Yes. Beginning with the 2021 cycle, the National YRBS questionnaire is available in Spanish. Translation of state and local YRBSS questionnaires is left to the discretion of state and local agencies.

YRBS questionnaires are designed to be administered in a school setting. It is important to consider the language used in regular classrooms and common second languages, if any, spoken by the student population. Check with school officials before deciding whether translation is needed.

YRBSS questionnaires in English and Spanish are in the public domain. Questionnaires may be translated to any language. No specific permission is required.

Uses of YRBSS results

How are the YRBSS results used?

State, territorial, and tribal governments, as well as local agencies and nongovernmental organizations use YRBSS data to set and track progress toward meeting school health and health promotion program goals.

They also use YRBSS data to support modification of school health curricula or other programs, support new legislation and policies that promote health, and seek funding and other support for new initiatives.

CDC and other federal agencies routinely use YRBSS data to assess trends in priority health behaviors among high school students, monitor progress toward achieving national health objectives, and evaluate the contribution of broad prevention efforts in schools and other settings.

These activities support efforts to reduce health risk behaviors among young people.

Where can I find more information on using YRBSS data?

Foti K, Balaji A, Shanklin S. Uses of Youth Risk Behavior Survey and School Health Profiles Data: Applications for Improving Adolescent and School Health. J Sch Health. 2011;81(6):345–354.

Can student behavior changes over time be tracked using the YRBSS?

Yes. The YRBSS tracks aggregate changes in student behavior over time.

Does the YRBSS track specific students over time?

No. A new sample of schools and students is drawn for each survey cycle. Students who participated cannot be tracked because no identifying information is collected.

Is it appropriate to report prevalence estimates for any of the U.S. racial or ethnic subgroups (such as American Indian or Alaska Native students) asked about on the National YRBS questionnaire?

Although prevalence estimates generated for students in each racial or ethnic subgroup are representative of these students nationally, caution should be used when analyzing and interpreting these data. Because of the small numbers of students in some racial or ethnic subgroups who participate in any single National YRBS, the estimates may lack precision. Precision can be improved by combining multiple years of National YRBS data.
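The arithmetic behind pooling can be sketched in a few lines. This is an invented illustration, not CDC code: the records, weights, and subgroup label are hypothetical, and a real pooled analysis must still use design-aware software (strata, PSUs, variance estimation) and follow CDC guidance on combining cycles.

```python
# Illustrative only: invented records showing how pooling cycles enlarges
# the effective sample for a small subgroup. This ignores strata, PSUs,
# and variance estimation, which real YRBS analyses must account for.
cycles = {
    2017: [{"subgroup": "AIAN", "var": 1, "weight": 0.9},
           {"subgroup": "AIAN", "var": 0, "weight": 1.1},
           {"subgroup": "other", "var": 1, "weight": 1.0}],
    2019: [{"subgroup": "AIAN", "var": 1, "weight": 1.0},
           {"subgroup": "AIAN", "var": 1, "weight": 0.5}],
}

# Pool the subgroup's records across all cycles.
pooled = [r for rows in cycles.values() for r in rows if r["subgroup"] == "AIAN"]

# One weighted prevalence over the pooled records.
total_w = sum(r["weight"] for r in pooled)
prevalence = sum(r["weight"] for r in pooled if r["var"] == 1) / total_w
print(len(pooled), round(prevalence, 3))  # → 4 0.686
```

The pooled estimate rests on four records instead of two per cycle; with real data the gain in precision is the point, but treat this as arithmetic intuition only.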

Analyzing YRBSS data

What software should I use to analyze YRBSS data?

See Software for Analysis of YRBS Data for a review of software packages suitable for analyzing YRBSS data and guidance on how to use them.
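Whatever package is used, the core requirement is the same: YRBS records carry sampling weights (plus stratum and PSU identifiers), and estimates must use them. As a minimal sketch with invented records and variable names (ignoring strata and PSUs entirely), a weighted prevalence can differ from the naive proportion:

```python
# Illustrative only: invented records. A weighted prevalence uses each
# student's sampling weight; the naive proportion ignores the design.
records = [
    {"smoked": 1, "weight": 1.8},
    {"smoked": 0, "weight": 0.6},
    {"smoked": 0, "weight": 1.2},
    {"smoked": 1, "weight": 0.4},
]

def weighted_prevalence(rows, var, weight="weight"):
    """Weight-adjusted proportion of rows where `var` equals 1."""
    total = sum(r[weight] for r in rows)
    return sum(r[weight] for r in rows if r[var] == 1) / total

naive = sum(r["smoked"] for r in records) / len(records)
adjusted = weighted_prevalence(records, "smoked")
print(naive, round(adjusted, 2))  # → 0.5 0.55
```

Dedicated survey packages go further and compute design-correct standard errors from the stratum and PSU variables, which this sketch does not attempt.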

How are the national, state, territory, tribal government, and local YRBS data different? Are the national data the aggregate of the state and other YRBS data?

National, state, territorial, tribal government, and local data come from separate scientific samples of schools and students. National YRBS data are not the aggregate of the state YRBS data sets, and state, territorial, tribal government, and local YRBS data are not subsets of the National YRBS data set. All of these surveys follow the same methodology and use the same core questionnaire.

Is there a national middle school YRBS?

No. However, middle school results are available for some states, districts, territories, and tribes that have elected to conduct a middle school YRBS in their jurisdiction. Middle school YRBS results are available on Youth Online.

Can I calculate state-level estimates of a health behavior using the National YRBS data?

No. The National YRBS was not designed to produce representative estimates at the state level.

Is it possible to analyze associations between state-level characteristics and student-level behaviors using the National YRBS data?

This type of analysis has significant limitations and should be conducted with caution. A state-level characteristic, such as the presence of a state law, can be added to a regression model as an exogenous (independent) variable and will yield statistically correct estimates. However, it is important to fully consider the context of these estimates.

The National YRBS was not designed to produce representative estimates at the state level. The number of students chosen from each state varies considerably and is usually too small to generate precise or stable state-level estimates. In addition, fewer than 50 states are included in the national sample each cycle.

Researchers should fully investigate the implications and interpretations of this type of analysis and should understand the sampling design of the National YRBS and how that design might influence their results. See Methodology of the Youth Risk Behavior Surveillance System for more information about the National YRBS sampling design.

Can I calculate prevalence estimates by urban/rural status? Is an urban/rural identifier available for the National YRBS data sets?

No. The National YRBS was not designed to produce estimates by urban or rural status. In the National YRBS, primary sampling units (PSUs) are selected based on urban and non-urban definitions, but it does not necessarily follow that a non-urban area is rural. Urban status indicates only that the PSU was one of the largest 54 metropolitan statistical areas (MSAs). Non-urban indicates that the PSU was not one of the largest 54 MSAs. It could be rural but is not necessarily rural. See Methodology of the Youth Risk Behavior Surveillance System for more information about the National YRBS sampling design.

Conducting your own YRBS

Do I need permission to use YRBSS questionnaires for my study, area, district, or school? Is there a cost?

The YRBSS questionnaires are in the public domain, and no permission is required to use them. You may download the questionnaires at no charge.

How do I conduct a YRBS in my area, district, or school?

See A Guide to Conducting Your Own Youth Risk Behavior Survey for information useful to communities and groups that plan to conduct their own YRBS.

If I conduct a YRBS, can CDC help me scan, process, or tabulate my data?

CDC provides data processing assistance only to states, territories, and local school districts that it funds directly to conduct a YRBSS. However, information on how the data are processed can be found on the Methods page and in the Methodology of the Youth Risk Behavior Surveillance System.

Is funding available for conducting a YRBS?

CDC has funding available for all 50 states, a small number of territories, and large urban school districts during each 5-year funding cycle.

Validity and reliability

Do students tell the truth on the YRBS questionnaire?

Research indicates that data of this nature may be gathered as credibly from adolescents as from adults. Internal reliability checks help identify the small percentage of students who falsify their answers. To obtain truthful answers, survey administrators must ensure that students perceive the survey as important and know that procedures have been developed to protect their privacy and allow for anonymous participation.

What kinds of validation or reliability studies have been done on the YRBS questionnaire?

The Methodology of the Youth Risk Behavior Surveillance System contains a description of most of the methodological studies conducted to date on the YRBSS questionnaires or YRBSS data collection procedures. In addition, the list of YRBSS MMWR publications and journal articles contains the actual journal articles describing the results of these studies.

These methodological studies include test-retest reliability studies on the 1991 and 1999 versions of the questionnaire; a study assessing the validity of self-reported height and weight; a study assessing the effect of changing the race or ethnicity question; a study examining how varying honesty appeals, question wording, and data-editing protocols affect prevalence estimates; and a study examining how varying the mode and setting of survey administration affects prevalence estimates.

Youth Risk Behavior Surveillance System (YRBSS)

YRBSS is the largest public health surveillance system in the U.S., monitoring multiple health-related behaviors among high school students.


Home

Crash of a Tupolev TU-154B-1 in Omsk: 178 killed

cruisers yachts 38 gls for sale

Show Map

IMAGES

  1. Cruisers Yachts 2019 38 Gls 38 Yacht for Sale in US

    cruisers yachts 38 gls for sale

  2. 2022 Cruisers Yachts 38 GLS Bowrider for sale

    cruisers yachts 38 gls for sale

  3. 2022 Cruisers Yachts 38 GLS OB SOUTH BEACH Sports Cruiser for sale

    cruisers yachts 38 gls for sale

  4. 2023 Cruisers 38 Gls New 2023 Cruisers 38 GLS

    cruisers yachts 38 gls for sale

  5. 2021 Cruisers Yachts 38 GLS Express Cruiser for sale

    cruisers yachts 38 gls for sale

  6. Cruisers Yachts 2019 38 Gls 38 Yacht for Sale in US

    cruisers yachts 38 gls for sale

VIDEO

  1. For Sale

  2. 2022 Cruisers Yachts 38 GLS South Beach OB

  3. 2024 Cruisers Yachts 38, GLS Outboard with Triple V10 Mercury, 400’S!

  4. For Sale

  5. Cruisers Yachts 38 GLS I/O Walkthrough

  6. PRE-OWNED CRUISERS YACHTS 38 GLS OUTBOARD

COMMENTS

  1. Cruisers Yachts 38 Gls boats for sale

    The starting price is $405,000, the most expensive is $970,609, and the average price of $699,000. Related boats include the following models: 50 Cantius, 42 Cantius and 420 Express. Boat Trader works with thousands of boat dealers and brokers to bring you one of the largest collections of Cruisers Yachts 38 gls boats on the market. You can ...

  2. Cruisers Yachts 38 Gls boats for sale

    Find Cruisers Yachts 38 Gls boats for sale in your area & across the world on YachtWorld. Offering the best selection of Cruisers Yachts to choose from. ... 2022 Cruisers Yachts 38 GLS. US$739,000. Silver Seas Yachts - San Diego | Newport Beach, California. 2021 Cruisers Yachts 60 Cantius Flybridge. US$1,690,000. OLYMPIA YACHT GROUP | Anna ...

  3. Cruisers Yachts 38 Gls for sale

    Cruisers Yachts 38 Gls for sale 45 Boats Available. Currency $ - USD - US Dollar Sort Sort Order List View Gallery View Submit. Advertisement. Save This Boat. Cruisers Yachts 38 GLS OB. 2024. Request Price. The 38 GLS OB for sale at your local dealer combines the unmatched performance and entertainment capabilities of the 38 GLS with powerful ...

  4. The Cruisers Yachts 38 GLS

    Cruisers Yachts 38 GLS I/O. With a drop-down beach door, hydraulic swim platform, and spacious bow lounge, the 38 GLS I/O for sale near you brings your day boating to the next level. Request Information. Features. Cockpit. The open-concept cockpit was designed with entertainment in mind. You can find endless seating options between the bow ...

  5. Cruisers Yachts 38 Gls Ob boats for sale

    Find Cruisers Yachts 38 Gls Ob boats for sale in your area & across the world on YachtWorld. Offering the best selection of Cruisers Yachts to choose from. ... 2024 Cruisers Yachts 38 GLS OB. Request price. SkipperBud's Marina Del Isle | Marblehead, United States. Request Info; In-Stock; 2020 Cruisers Yachts 38 GLS OB. US$639,000. US $4,855/mo.

  6. 2021 Cruisers Yachts 38 GLS I/O Cruiser for sale

    For Sale: 2021 Cruisers Yachts 38 GLS - Your Gateway to Luxury and Performance** If you're in the market for a yacht that combines top-tier craftsmanship, exhilarating performance, and unmatched versatility, the 2021 Cruisers Yachts 38 GLS is the vessel for you. With only 185 hours of freshwater use and meticulously maintained, this yacht is ...

  7. 2020 Cruisers 38 Gls Yacht For Sale

    2021 CRUISERS YACHTS 38 GLS. $499,995. Wilmington, North Carolina. Details. Compare. Explore this 2020 Cruisers Yachts 38 GLS for sale. is located in Miami, view photos, yacht description, priced at $639,000.

  8. A License To Chill Cruisers Yachts 38' 2023 Jupiter, Florida

    A License To Chill is a 2023 Cruisers Yachts 38' 38 GLS OB listed for sale with United Yacht Broker John Blumenthal. John can be reached at 1-772-215-2571 to answer any questions you may have on this boat. ... Our Cruisers Yachts listing is a great opportunity to purchase a 38' Bowrider for sale in Jupiter, Florida - United States. This ...

  9. Cruisers Yachts 38 Gls I O for sale

    Cruisers Yachts 38 Gls I O for sale 4 Boats Available. Currency $ - USD - US Dollar Sort Sort Order List View Gallery View Submit. Advertisement. Save This Boat. Cruisers Yachts 38 GLS I/O. 2024. Request Price. With a drop-down beach door, hydraulic swim platform, and spacious bow lounge, the 38 GLS I/O for sale near you brings your day boating ...

  10. Used Cruisers 38 GLS Yachts For Sale

    About Cruisers Yachts 38 GLS. The 38 GLS is a versatile boat that combines a bowrider and outboard with the craftsmanship you expect from Cruisers Yachts. Its open-concept cockpit is designed for entertainment, offering endless seating options. With triple 300 Mercury Verados and a top speed of 53mph, the 38 GLS delivers impressive performance ...

  11. Cruisers Yachts 38 GLS Outboard

    Discover Cruisers Yachts 38 GLS innovative design featuring luxury amenities and triple Mercury Verados. Relax and ride in style in the 38 GLS. ... The 38 GLS OB for sale at your local dealer combines the unmatched performance and entertainment capabilities of the 38 GLS with powerful, easy-to-maintain outboards. Expand your swimming area by ...

  12. Buy New Cruisers 38 GLS Yachts For Sale

    Cruisers Yachts 38 GLS. The 38 GLS is breaking boundaries with its versatility, bringing together a bowrider and outboard with the top-notch craftsmanship you'd expect from Cruisers. The open-concept cockpit is all about entertainment, offering a variety of seating options from the bow lounge to the mid-ship dinettes and aft-facing bench.

  13. 2020 Cruisers Yachts 38 GLS Bowrider for sale

    2020 38 Cruisers GLS "Perfect Day" for sale. This 2020 38 GLS is one of the only available with triple 400 Mercury Verados. The 400's push the boat to 49 knots WOT with a 32-knot cruise at a .97 MPG. ... 2020 Cruisers Yachts 38 GLS. US$629,000. Cape Coral, Florida. 2020 Fountaine Pajot Saona 47. US$1,095,000. Saint George, Grenada. 2007 ...

  14. Cruisers Yachts 38 boats for sale

    The starting price is $405,000, the most expensive is $970,609, and the average price of $699,000. Related boats include the following models: 50 Cantius, 42 Cantius and 38 GLS. Boat Trader works with thousands of boat dealers and brokers to bring you one of the largest collections of Cruisers Yachts 38 boats on the market.

  15. Cruisers Yachts 38 Gls for sale in United States

    View a wide selection of Cruisers Yachts 38 Gls for sale in United States, explore detailed information & find your next boat on boats.com. #everythingboats ... The 38 GLS OB for sale at your local dealer combines the unmatched performance and entertainment capabilities of the 38 GLS with powerful, easy-to-maintain outboards. ...

  16. 60 ft used yachts for sale

    Omsk Oblast, Russia Offline Map For Travel & Navigation is a premium, very easy to use and fast mobile application. EasyNavi has developed the Omsk Oblast, Russia Offline Map For

  18. 2024 Cruisers Yachts 38 GLS Bowrider for sale

    Description. The 2024 Cruisers Yachts 38 GLS offers an open concept designed with entertaining in mind. This 38 GLS features everything you love about the Cantius series. With triple Mercury engines you will arrive at your destination in no time, and getting in and out of the water is easy with the lowering beach door and ...

  20. Cruisers Yachts 38 Gls Ob boats for sale

    The starting price is $679,000, the most expensive is $720,000, and the average price is $695,000. Related boats include the following models: 50 Cantius, 42 Cantius and 38 GLS. Boat Trader works with thousands of boat dealers and brokers to bring you one of the largest collections of Cruisers Yachts 38 GLS OB boats on the market. You can also ...

  21. Cruisers Yachts 38 Gls Outboard boats for sale

    Got a specific Cruisers Yachts 38 GLS outboard in mind? There are currently 6 listings available on Boat Trader from both private sellers and professional boat dealers. All listed boats, from the oldest to the newest, were built in 2024. Related boats include the following models: 50 Cantius, 42 Cantius and 38 GLS.

  23. Cruisers Yachts 38 Gls I O boats for sale

    The starting price is $595,000, the most expensive is $599,900, and the average price is $597,450. Related boats include the following models: 50 Cantius, 42 Cantius and 38 GLS. Boat Trader works with thousands of boat dealers and brokers to bring you one of the largest collections of Cruisers Yachts 38 GLS I/O boats on the market.