A total of 35 items or books were found

QualiBuddy: an Online Tool to Improve Research Skills in Qualitative Data Analysis

Abstract : Purpose: Novice researchers experience difficulties in analysing qualitative data. Theoretical manuals are often insufficient for developing the skills necessary for qualitative data analysis. Supervisors supporting students in analysing qualitative data stress the need for practical guidance, including exercises and feedback. The purpose of this paper is to present and discuss QualiBuddy, an interactive online support tool developed in answer to this need. Design/methodology/approach: An online support tool was developed in answer to existing problems regarding analysing qualitative data. The tool provides a learning trajectory of 11 stages of analysis, all of which contain examples, exercises, feedback, verification questions and questions for reflection. The tool was developed from a multidisciplinary perspective and constructed around various steps. During the development process, internal feedback from members of the project team, as well as external feedback from an international steering group of experts in qualitative research, was taken into account. Findings: QualiBuddy is based on an empirically and theoretically grounded approach to qualitative data analysis. Pilot tests with experienced qualitative researchers suggest that the tool potentially allows novice researchers from various domains to develop and improve their skills in conceptualising interview data, specifically within a grounded theory approach. Originality/value: QualiBuddy is a newly developed interactive online education tool based on, and complementary to, existing guides for qualitative data analysis.


Scalable and Holistic Qualitative Data Cleaning

Data quality is one of the most important problems in data management, since dirty data often leads to inaccurate data analytics results and wrong business decisions. Poor data across businesses and the government cost the U.S. economy $3.1 trillion a year, according to a report by InsightSquared in 2012. Data scientists reportedly spend 60% of their time cleaning and organizing data, according to a survey published in Forbes in 2016. Therefore, we need effective and efficient techniques to reduce the human effort in data cleaning. Data cleaning activities usually consist of two phases: error detection and error repair. Error detection techniques can generally be classified as either quantitative or qualitative. Quantitative error detection techniques often involve statistical and machine learning methods to identify abnormal behaviors and errors; they have mostly been studied in the context of outlier detection. Qualitative error detection techniques, on the other hand, rely on descriptive approaches to specify patterns or constraints of a legal data instance. One common way of specifying those patterns or constraints is with data quality rules expressed in some integrity constraint language; errors are then captured by identifying violations of the specified rules. This dissertation focuses on tackling the challenges associated with detecting and repairing qualitative errors. To clean a dirty dataset using rule-based qualitative data cleaning techniques, we first need to design data quality rules that reflect the semantics of the data. Since obtaining data quality rules by consulting domain experts is usually a time-consuming process, we need automatic techniques to discover them. We show how to mine data quality rules expressed in the formalism of denial constraints (DCs).
We choose DCs as the formal integrity constraint language for capturing data quality rules because they can capture many real-life data quality rules while still allowing for efficient discovery algorithms. Since error detection often requires pairwise tuple comparison, a quadratic complexity that is expensive for large datasets, we present a distribution strategy that distributes the error detection workload to a cluster of machines in a parallel shared-nothing computing environment. Our proposed distribution strategy aims at minimizing, across all machines, the maximum computation cost and the maximum communication cost, which are the two main types of cost one needs to consider in a shared-nothing environment. In repairing qualitative errors, we propose a holistic data cleaning technique, which accumulates evidence from a broad spectrum of data quality rules and suggests possible data updates in a holistic manner. Compared with previous piecemeal data repairing approaches, the holistic approach produces data updates with higher accuracy because it captures the interactions between different errors in one representation, and aims at generating data updates that fix as many errors as possible.
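The pairwise violation check the abstract describes can be illustrated with a minimal sketch. The denial constraint below is hypothetical, chosen only for illustration: no two tuples may share a state yet record different tax rates (the functional dependency state → tax expressed as a DC). The function names and the toy table are assumptions, not part of the dissertation.

```python
# Sketch: detecting violations of a denial constraint (DC) by pairwise
# tuple comparison, as in rule-based qualitative error detection.
# Illustrative DC: not exists t1, t2 such that
#   t1.state == t2.state and t1.tax != t2.tax
from itertools import combinations

def dc_violations(rows, predicate):
    """Return index pairs (i, j) of tuples that jointly violate the DC.

    `predicate(t1, t2)` returns True when the pair matches the negated
    body of the DC, i.e. when the pair constitutes a violation.
    """
    return [(i, j)
            for (i, t1), (j, t2) in combinations(enumerate(rows), 2)
            if predicate(t1, t2) or predicate(t2, t1)]

# Toy table: row 2 disagrees with row 0 on tax for the same state.
rows = [
    {"state": "NY", "tax": 8.875},
    {"state": "CA", "tax": 7.25},
    {"state": "NY", "tax": 4.0},
]

violations = dc_violations(
    rows,
    lambda t1, t2: t1["state"] == t2["state"] and t1["tax"] != t2["tax"],
)
print(violations)  # [(0, 2)]
```

The nested pairwise scan makes the quadratic cost mentioned in the abstract concrete: every tuple pair must be examined, which is exactly the workload the proposed distribution strategy spreads across machines.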


The SAGE Handbook of Qualitative Data Collection

How we understand and define qualitative data is changing, with implications not only for the techniques of data analysis, but also for how data are collected. New devices, technologies and online spaces open up new ways for researchers to approach and collect images, moving images, text and talk. The SAGE Handbook of Qualitative Data Collection systematically explores the approaches, techniques, debates and new frontiers for creating, collecting and producing qualitative data. Bringing together contributions from internationally leading scholars in the field, the handbook offers a state-of-the-art look at key themes across six thematic parts:
Part I Charting the Routes
Part II Concepts, Contexts, Basics
Part III Types of Data and How to Collect Them
Part IV Digital and Internet Data
Part V Triangulation and Mixed Methods
Part VI Collecting Data in Specific Populations


Analysis of Qualitative Data

New Developments

Readers familiar with factor analysis may find parallels between factor-analysis models for continuous data and latent-class models for discrete data. The treatment of latent-class analysis in this chapter is most closely related to the work of Goodman (1974a,b) and Haberman (1974c, 1976b, 1977). This work builds on a substantial earlier literature. The most extensive treatment of latent-class analysis is in Lazarsfeld and Henry (1968). Important earlier papers include Lazarsfeld (1950a,b), ...
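The latent-class model the excerpt refers to can be sketched numerically. The code below is a minimal illustration, not the book's own treatment: it simulates binary responses from two latent classes under local independence and fits the model with a plain EM algorithm. All variable names and the simulated parameters are assumptions for illustration.

```python
# Sketch: EM estimation of a two-class latent-class model for three
# binary items, assuming local independence within each class.
import numpy as np

rng = np.random.default_rng(0)

# Simulate data: each class has its own item-response probabilities.
true_p = np.array([[0.9, 0.8, 0.7],   # class 0
                   [0.2, 0.3, 0.1]])  # class 1
z = rng.integers(0, 2, size=500)                      # latent class labels
X = (rng.random((500, 3)) < true_p[z]).astype(float)  # observed 0/1 items

# EM from a random starting point.
pi = np.array([0.5, 0.5])                 # class proportions
p = rng.uniform(0.3, 0.7, size=(2, 3))    # item probabilities per class

for _ in range(200):
    # E-step: posterior probability of each class for each respondent,
    # using the local-independence likelihood (product over items).
    like = np.prod(p[None] ** X[:, None] * (1 - p[None]) ** (1 - X[:, None]),
                   axis=2)                # shape (n, 2)
    resp = like * pi
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update proportions and conditional item probabilities.
    pi = resp.mean(axis=0)
    p = resp.T @ X / resp.sum(axis=0)[:, None]

print(np.round(pi, 2), np.round(p, 2))
```

The analogy to factor analysis is visible in the structure: the discrete latent class plays the role of the continuous factor, and local independence of the items given the class mirrors the conditional independence assumption of the factor model.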