Sampling Design and Executive Report for the 2022-2023 Survey


    The 2022-2023 China Working Conditions Research Project was completed by the China Working Conditions Research Project Team (the “project team”). The relevant surveys were uniformly organized by the China Statistical Information Services Center of the National Bureau of Statistics of China (the Social Conditions and Public Opinion Survey Center of the National Bureau of Statistics, hereinafter the “Social Survey Center of NBS”) and jointly implemented by local social and public opinion survey institutions at the provincial (regional and municipal) level. The sampling design, quality control design, project implementation, data audit and weight design related to the project are described as follows.


Ⅰ. Sampling design

(Ⅰ) Target population

    The target population of the 2022-2023 China Working Conditions Survey is the employed population aged 16 and over living in family households in cities of China. Specifically, “aged 16 and over” means potential respondents were at least 16 years old when surveyed. “Living in a family household” means living, for more than half a year in the surveyed city, in proprietary or rented housing rather than venues owned or controlled by organizations such as collective dormitories, elderly care institutions, prisons, military camps and hotels (regardless of whether the respondent has local household registration). Members of a family household include those who are on business trips, studying or visiting relatives and friends and who share the same meals and accommodation in the household. “Employed” means a respondent spent at least 1 hour engaging in labor, service, creation or work in exchange for wage or profit in the week (Monday to Friday) before the survey time. “Cities” means urban areas of municipal districts and county-level cities in China. According to the Provisions on Urban-rural Division for Statistical Purposes issued by the National Bureau of Statistics, “urban areas” here refer to residents’ committees and other areas connected to the actual construction where the governments of municipal districts and county-level cities are located. The total sample size of the survey design is 8,100.

(Ⅱ) Sampling method and process

    This survey adopts a complex sampling design integrating stratified sampling, probability proportional to size (PPS) sampling and simple random sampling.

    In this survey, 973 municipal districts and 388 county-level cities (1,361 county-level administrative units in total) are regarded as the primary sampling units (PSUs). Considering the differences in economic development levels across regions, this survey divides PSUs into three layers based on city size in order to improve the representativeness of samples and reduce sampling error: first, the super-large city layer, comprising Beijing, Shanghai, Guangzhou and Shenzhen; second, the large city layer, comprising the direct-administered municipalities, provincial capital cities and non-provincial-capital sub-provincial cities, excluding the super-large cities; third, the medium and small city layer, comprising all municipal districts and county-level cities excluded from the super-large and large city layers. On the basis of this stratification, a four-stage sampling design was further implemented. The sampling frames for the first three stages were developed based on the Seventh National Population Census data (2021), and the final-stage sampling frame was established during the door-to-door interview period. Specifically, the following measures were taken:

    First, county-level administrative units (including 973 municipal districts and 388 county-level cities, namely, 1,361 county-level administrative units in total) as the PSUs were categorized into three layers based on city size: the super-large city layer, large city layer and medium and small city layer. With the working population aged 16-60 as supplementary information[1], 5, 15 and 40 PSUs were sampled from the three layers respectively based on the PPS principle (the larger the working population, the more likely it will be sampled), deriving a total of 60 PSUs.

[1] Given the unavailability of working population data, the working population aged 16-60 derived from the Seventh National Population Census was used as an alternative. Working population across all stages of PPS sampling was replaced this way.
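The PPS principle applied at this stage can be illustrated with a minimal sketch of our own (systematic PPS on cumulative size totals); the unit sizes and counts below are made up and this is not the project's implementation.

```python
import random

def pps_sample(sizes, n):
    """Systematic PPS: sample n units with probability proportional
    to size, walking equally spaced points along cumulative totals."""
    interval = sum(sizes) / n
    start = random.uniform(0, interval)
    points = [start + i * interval for i in range(n)]
    chosen, cum, j = [], 0.0, 0
    for idx, size in enumerate(sizes):
        cum += size
        while j < n and points[j] <= cum:
            chosen.append(idx)
            j += 1
    return chosen

# e.g. drawing 3 PSUs from 8 hypothetical working-population sizes
random.seed(0)
print(pps_sample([120, 300, 80, 450, 60, 200, 90, 510], 3))  # → [3, 5, 7]
```

Note how the larger units (indices 3 and 7) are favored, which is exactly the "the larger the working population, the more likely it will be sampled" behavior described above.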


    Second, communities were regarded as the secondary sampling units (SSUs). With the working population aged 16-60 as supplementary information, 9 communities were sampled from each of the 60 PSUs, deriving a total of 540 SSUs.

    Third, family households were considered the third-stage sampling units (TSUs). According to the project design, the sample size of each community was 15. However, considering nonresponse, refusal and other problems, 60 households were sampled as contact targets on the simple random principle in the TSU sampling stage. The addresses of contact households were issued in multiple batches, with 30 addresses released in the first batch. While contacting the addresses to be surveyed, interviewers were not allowed to conduct surveys at addresses not included in the list. For sampled households that still could not be reached after three contact attempts, the interviewer needed to specify the reason in the relevant section of the Registration Form of Home Visits before moving on to the next household. In case of a failure to obtain 15 valid samples after contacting 30 households, the interviewer could re-apply for new addresses, 15 at a time.

    Fourth, the working population within households was regarded as the ultimate sampling units (USUs). The interviewer randomly selected 1 working person from a household using the Kish grid as the respondent. The questionnaire-based interview would begin if the selected respondent agreed to be interviewed. If the selected respondent refused or could not accept the interview, the interviewer was to record it truthfully on the Registration Form of Home Visits. Regardless of the reason for a failed visit, the interviewer was prohibited from switching from a selected respondent to another member of the household. The survey of a community was terminated once 15 samples were collected.
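The Kish-grid selection used in this fourth stage can be sketched as follows. The table values and names here are purely illustrative (the official Kish selection table and its row-assignment rules differ): each pre-assigned grid row maps the number of eligible members to the one member who must be interviewed, which removes interviewer discretion from respondent selection.

```python
# Illustrative (not official) Kish grid: rows are grid versions assigned
# to questionnaires in advance; columns correspond to the number of
# eligible members (capped at 6); the cell says which listed member
# (1-based, in a fixed listing order) to interview.
KISH_TABLE = [
    [1, 1, 1, 1, 1, 1],
    [1, 2, 2, 2, 2, 2],
    [1, 1, 2, 3, 3, 3],
    [1, 2, 3, 4, 4, 4],
    [1, 1, 3, 3, 5, 5],
    [1, 2, 1, 2, 2, 6],
]

def select_respondent(members, grid_version):
    """members: eligible (employed, aged 16+) household members in a
    fixed listing order; grid_version: 0-5, preassigned per address."""
    n = min(len(members), 6)
    pick = KISH_TABLE[grid_version][n - 1]
    return members[pick - 1]

print(select_respondent(["Zhang", "Li", "Wang"], grid_version=2))  # → Li
```

Because the grid row is fixed before the interviewer knows who lives in the household, the selection is reproducible and auditable, which is what makes the audio check of the “respondent selection” step meaningful.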

    The sampling design for each stage is shown in Table 1.


[Table 1 Sampling design for each stage]


(Ⅲ) Sample size design

    For simple random sampling with replacement, or large-sample simple random sampling without replacement, the estimated sample size is given by:

n = \frac{z_{\alpha/2}^2 \, p(1-p)}{d^2}

    Where $p$ is the frequency of a class of samples in the population and $p(1-p)$ is the population variance; $z_{\alpha/2}$ is the critical value of the normal distribution at confidence level $1-\alpha$; and $d$ is the difference between the sample estimate and the population parameter, namely, the sampling error. For the above equation, if we set $p$ as 0.5[1], the significance level $\alpha$ as 0.05 and the absolute error $d$ as 3%, then, for the estimation of the majority of distributions, we only need to survey about 1,000 samples.

[1] In this case, the maximum population variance is 0.5×(1-0.5)= 0.25.
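The back-of-the-envelope calculation can be checked numerically; 1.96 is the two-sided normal critical value corresponding to α = 0.05.

```python
# Base sample size under simple random sampling: n = z^2 * p(1-p) / d^2
z = 1.96   # two-sided critical value of the normal distribution at alpha = 0.05
p = 0.5    # conservative proportion, maximizing the variance p(1-p)
d = 0.03   # absolute sampling error of 3 percentage points

n0 = z**2 * p * (1 - p) / d**2
print(round(n0))  # → 1067, i.e. roughly 1,000 samples
```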


    However, given that this survey used a multistage complex sampling design instead of simple random sampling, we must also take into account the design effect ($deff$). The design effect is the ratio of the sample variance generated under the complex sampling design to the sample variance generated under simple random sampling with the same sample size. The design effect is estimated as follows:

deff = 1 + (b - 1)\,roh

    Where $b$ is the number of samples selected from a single sampling unit, and $roh$ is the homogeneity within the sampling unit. This equation indicates that the larger the number of samples selected from a single sampling unit, the larger the design effect; and the larger the homogeneity within the sampling unit, the larger the design effect. Based on the sampling design of this survey and our prior sampling experience, we set the design effect to 5; hence the sample size accounting for the design effect is 1000 × 5 = 5000.

    To obtain an unbiased parameter estimate, a certain response rate must be ensured for a social survey. Methodologically, the target population can be divided into two potential populations by whether a response is given: the population available for survey and the population unavailable for survey. The size of the former equals response rate × target population size, and that of the latter equals (1 − response rate) × target population size. The lower the response rate, the smaller the population about which the sample estimate can make inferences. Only by assuming that there is no statistically significant difference between the parameters of the available and unavailable populations can we generalize the survey results to all population members in the presence of nonresponse. The rule of thumb is to ensure a response rate of at least 50% in sampling surveys (that is, the available and unavailable populations each account for about half of the target population). Considering the presence of nonresponse, we need to appropriately increase the number of selected samples. Based on prior experience, we assumed a response rate of 65% for this survey. Thus, accounting for nonresponse, the sample size should be 5000 ÷ 0.65 ≈ 7692. Further considering the specific sample distributions, we determined the final sample size as 8100 (= 60 × 9 × 15): samples from 60 county-level administrative units, with 9 communities sampled from each county-level administrative unit, 15 households sampled from each community and 1 person sampled from each household.
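The full sample-size chain described above, from base size through design effect and nonresponse, can be reproduced with the figures given in the text:

```python
# Sample-size adjustments following the figures in the text
n_base = 1000         # approximate SRS sample size for a 3% absolute error
deff = 5              # assumed design effect of the complex sampling design
response_rate = 0.65  # assumed response rate

n_deff = n_base * deff             # account for the design effect
n_needed = n_deff / response_rate  # inflate for expected nonresponse
n_final = 60 * 9 * 15              # rounded up to fit the 60 x 9 x 15 structure
print(n_deff, round(n_needed), n_final)  # → 5000 7692 8100
```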


Ⅱ. Design of quality control

    A 2-class, 3-stage quality control scheme was implemented for the survey. The project team was responsible for the design and coordination of the entire quality process, while the Social Survey Center of NBS and local survey institutions were responsible for implementing the specific quality plans. In the meantime, data quality was audited and finalized by the project team.

(Ⅰ) Pre-survey control

    1. Establishing the special survey group and provincial-level executive groups

    Firstly, the special survey group was established. A leader from the Social Survey Center of NBS served as the primary person responsible for the project. In the meantime, the positions of national responsible person for project implementation and responsible person for review were also established. The individuals filling these positions had all organized and implemented multiple large-scale nationwide survey projects and were highly experienced in survey implementation and quality control.

    Secondly, provincial executive units established a specialized project execution team. Members of the execution team met relevant staffing requirements and were able to ensure survey quality and submit personnel information forms to the Social Survey Center of NBS in a timely manner. Interviewers and review supervisors were from separate teams, that is, review supervisors did not work concurrently as interviewers.

    2. Making implementation plans

    First, the master survey implementation plan was made. The master survey implementation plan and manual were developed to standardize work requirements in each stage, help provincial implementation units carry out their work against the standards, reduce systematic errors and ensure the cross-sectional comparability of provincial survey data.

    Second, provincial executive units formulated specific implementation plans covering interviewer and supervisor training plan, sampling preparation, interviewer arrangement, review work arrangements and survey progress plan, based on project requirements and actual conditions of each province.

    3. Testing quality control and other functions of computer-assisted personal interviewing system

    Based on quality control requirements, functions related to quality control of the computer-assisted interview survey system were tested, including functional and stress testing of full-process synchronous recording of survey interviews, GPS positioning and interview photo capturing.

    4. Selection and training of supervisors and interviewers

    First, supervisors and interviewers were selected. Based on project requirements, supervisor and interviewer selection requirements were established. Provincial execution units selected qualified supervisors and interviewers from implementation and interviewer teams of each province according to the requirements. After further review and screening, the final roster of supervisors and interviewers participating in the survey was confirmed and submitted to the Social Survey Center of NBS.

    Second, nationwide and provincial training was carried out at two levels: Firstly, concentrated training of the responsible persons and supervisors of provincial execution units was carried out, with videos recorded and distributed to lower levels. Secondly, provincial execution units uniformly organized training of project team personnel and interviewers from each province to ensure that managers and interviewers of all implementation units received unified training. The training was carried out in the form of lectures and interactions, with content covering the basic information survey, questionnaire, sampling, progress arrangement and control, major implementation difficulties, personnel composition and requirements, material preparation, the field implementation process, PAD operation and management rules, quality control and anti-epidemic measures. All survey workers were required to understand and master the survey requirements, use PADs proficiently and implement the survey according to quality control requirements.


(Ⅱ) Mid-survey control

    1. Key stage control

    Firstly, the “household selection” stage. As a survey requirement, a visit to a household should not be regarded as failed unless there have been 3 nonresponses or 2 refusals. In case of any failed visit, the interviewer must truthfully specify the causes on the Registration Form of Home Visits. For communities where the specified number of valid sample interviews could still not be achieved after 3 rounds of home visits, the project team would provide a second set of visiting addresses. To ensure interviewers actually visited the specified addresses, the project team required interviewers to take a set of photos of each interviewed household (photos of the full name of the residents’ committee, the identifier number of the residential building/bungalow, and the household’s doorplate number), where the indicated address should match the sample address. Questionnaires with photos missing or inconsistent addresses were regarded as invalid.

    Secondly, the “respondent selection” stage. According to survey procedures, after entering a household the interviewer should select the respondent from household members using the Kish grid provided on the first page of the questionnaire. Respondent selection is crucial for ensuring sample randomness and thus must be properly implemented. Interviewers were required to record audio of the entire interview process, and the proper “respondent selection” procedure had to be reflected in the audio. All questionnaires administered by interviewers found to be engaged in falsification were regarded as invalid, as were questionnaires with the “respondent selection” procedure missing from the audio.

    Third, the home interview stage. Interviewers were required not to omit, skip or misuse questions in the interview process, and immediate remedies had to be implemented in case of such mistakes. Until any required supplementary survey was completed, the interviewer could not conduct new interviews. In instances of evident falsification of audio records, the survey agency was required to re-perform the interviews for all questionnaires administered by the interviewer concerned.

    2. Accompanied interviews and guidance

    Firstly, supervisors were required to accompany interviewers (especially new interviewers). It had to be ensured that every interviewer was accompanied: an experienced interviewer was accompanied in 3 interviews, and a new interviewer in at least 5 interviews.

    Secondly, interviewers were provided with guidance. In the accompanied interviews, interviewers’ operating techniques, familiarity with the questionnaire and interview process, and expressive and communicative skills were observed to ensure each interviewer carried out interviews according to the prescribed process. After accompanied interviews, a summary was to be provided to interviewers in a timely manner and proper written records kept. After an interviewer completed 2-3 questionnaires, the supervisor needed to assess his/her speed and quality and provide necessary guidance. Interviewers who were still not proficient in interview operations could receive additional accompanied interviews, be re-trained, or be dismissed from the survey, depending on actual conditions.


(Ⅲ) Post-survey control

    1. Comprehensive data audit

    Firstly, an institutional audit was carried out. After provincial executive units uploaded all data as required, the Social Survey Center of NBS audited the survey data according to unified standards. The data was proportionally audited by the Social Survey Center of NBS, focusing on the dimensions of accuracy, integrity, logic and abnormality. First, the audit of code accuracy: the codes of districts, counties and administrative divisions of communities must be filled in correctly, and such information (full names of provinces, cities and counties must be indicated) should be entered according to the newest List of Administrative Division Codes for Statistical Purposes. Second, the audit of integrity: the answer to each item must be provided completely in the specified format, without omission or misplacement in wrong rows or columns. Third, the audit of logic: indicators involving logical relationships or skips must satisfy the underlying logic. Fourth, the audit of abnormality: data entered should be within a reasonable range, and each item was checked for missing values, extremes or anomalies.

    Second, the audit of the project team. On the basis of the institutional audit, the project team carried out a secondary audit for data acceptance and made the final judgment on whether each sample was valid. In that regard, the project team established a special audit group consisting of 17 auditors, including 11 graduate students and 6 undergraduate students, assisted by 1 teacher responsible for audit guidance and communication. The audit group developed a detailed data audit plan specifying matters such as the division of responsibilities, the audit tasks in each stage and the priorities. The audit mainly focused on the authenticity, correctness, skipped or omitted questions, and error or bias correction of each piece of sample data in the system. First, the audit of authenticity: whether true visits were conducted and whether the photos, times, GPS records and visit characteristics were consistent were examined, with comprehensive judgment made based on the evidence chain. Second, the audit of correctness: whether the visits were correct was examined, namely, whether the household visited was on the list of home visits, whether the Kish grid was correctly used during the home visit, whether the sampling frame was correct and whether the respondent was the result of random sampling. Third, the audit of skipped or omitted questions: skipped and omitted items were reviewed mainly through quick playback of the audio recordings of interviews, identified and recorded with reference to the average duration of each questionnaire or module, and followed by feedback. Fourth, the audit of error or bias correction: errors such as erroneous entries were checked to improve data quality and accuracy and reduce the entry error rate. In the meantime, interviewers’ nonconforming visit practices were recorded in detail.

    2. Telephone follow up of respondents

    Telephone follow up was conducted with randomly selected completed questionnaires (those with telephone numbers left), and identified problems, especially falsification, were fed back to implementation units in a timely manner to ensure all problems were promptly discovered and dealt with. If an interviewer was found to have falsified data, all related questionnaires would be invalidated and the interviewer’s qualification for the survey could be canceled depending on the severity of the violation. In the meantime, the implementation unit was required to supplement the missing samples.


Ⅲ. Project implementation

    The project was implemented over the following 4 stages:

    (Ⅰ) Contract negotiation stage: December 2021 - September 2022

    In this stage, Party A (the project team) and Party B (Social Survey Center of NBS) conducted multiple rounds of negotiations about specific matters such as responsibilities of each party, survey process and sampling method, and on that basis, reached the final contract after repeated revision. The communications during this stage helped deepen mutual understanding and trust, laying a good foundation for the survey work in the next stage.

    (Ⅱ) Starting and preparatory stage: October 2022 - November 2022

    The starting stage began in October 2022 after the execution of the contract and ended on November 13, 2022 when the first batch of systematic questionnaires was received. The main work in this stage was as follows: Party A provided Party B with the formal annual questionnaire and instructions on questionnaire filling, and Party B contacted relevant agencies to carry out the third-stage sampling and formulated the implementation plan and quality control plan according to Party A’s requirements. Using the computer-assisted personal interviewing system, the online PAD questionnaire was designed and tested. Pilot interviews were carried out in Liaoning, Shandong and Sichuan provinces, deriving 15 samples. The operation of the PAD questionnaire was tested, and with the participation of auditors from Party A, problems arising from the use of PAD questionnaires were promptly identified. Eventually, the PAD was set to its optimal state and the PAD questionnaire test was passed. After that, the Social Survey Center of NBS contacted and organized interviewers from different provinces to conduct the survey and training. All training meetings were held online, with quality control personnel from Party A participating in multiple training sessions.

    In the meantime, Party A organized personnel to provide training on the audit of questionnaire data. After 1 month of discussion and rehearsal beginning in November 2022, the secondary audit team finalized the audit plan and gradually put it into trial implementation. On the basis of the opinions of the 2019 audit team, the audit plan divides the audit work into four tasks based on this year’s supplementary questionnaire requirements, namely, authenticity, correctness, skipped or omitted items, and error or bias correction. Further, audit standards were formulated based on the audit tasks, and audit record and attendance forms were developed. To ensure the logical rigor of the questionnaires, the secondary quality control personnel implemented multiple rounds of questionnaire tests. To complete the tasks efficiently and with high quality, new personnel were recruited from colleges and universities to establish the audit team. Realistic surveys were conducted to familiarize them with the questionnaires, and audit training based on questionnaires of previous years was carried out to familiarize them with the audit process. An audit flow chart was developed to improve audit standards. All audit members attended, in separate batches, local training meetings with complete minutes taken, thereby being fully prepared for the large-scale questionnaire data audit.

    (Ⅲ) Project implementation stage: November 14, 2022 - September 14, 2023

    Upon the completion of the pilot survey and preparatory stage, the Social Survey Center of NBS officially started the survey and data collection. As shown in Figure 2, survey data was collected through the following 4 processes: First, after being selected, the respondent answered questions on the PAD questionnaire. Second, the interviewer completely and accurately recorded the respondent’s answers on the PAD. Third, after the interview was completed, interviewers uploaded the survey data. Fourth, the uploaded data, transmitted through the server, was visualized on the backend of the system, available to be audited, processed and analyzed by staff.

    After the provincial surveys officially started, the basic conditions of the first 10 questionnaires submitted by each province were scrutinized. If this audit was passed, the audit team of the project group would be notified to check completed questionnaires online and asked to provide feedback within 1 day.


Figure 2 Collection of survey data


    With the official implementation of the survey, auditors from Party A started the synchronous review and acceptance process for data of cases approved in the initial system audit process. The outbreak of COVID-19 in December 2022 disrupted the original plan and arrangements of the survey. Accordingly, the audit work plan and arrangements were also adjusted. Generally, the secondary audit was divided into the following three stages: December 2022-February 19, 2023, the audit was conducted at home during winter vacation, with 2,462 questionnaires audited; February 19, 2023 - July 2023, online audit stage with 7,658 questionnaires audited; July 2023 - August 2023, online audit from home during summer vacation, with 8,370 questionnaires audited as of July 25, 2023.

    (Ⅳ) Data submission and acceptance stage: September 2023 - October 2023

    Data to be submitted was prepared as per contract requirements, and the first submission, in the form of an external hard drive, was made on September 14, 2023. With respect to the submitted data, the audit team completed the primary audit on September 19, 2023, finding 4 problems: First, interviewer codes were missing. Second, photos of 274 cases needed to be supplemented. Third, recordings of 73 cases needed to be supplemented. Fourth, the sample data contained a total of 11 labeling errors and 1 missing label. On October 6, 2023, the Social Survey Center of NBS submitted supplementary data. By October 12, 2023, the audit team completed the secondary audit and organization of the data. With that, the implementation of the 2022-2023 China Working Conditions Survey was basically completed.


Ⅳ. Data audit

    To ensure high-quality survey data, a re-audit system was implemented. The primary audit was completed by local supervisors, followed by a proportional spot check by the Social Survey Center of NBS. The secondary (final) audit was completed by the audit team of the project group. From November 13, 2022 to July 25, 2023, the audit team of the project group received a total of 8,370 samples from Party B, 728 of which were audited by the Social Survey Center of NBS, accounting for 8.7%. After the re-audit, 8,155 samples were found valid and 215 invalid, a validity rate of 97.43%. Among the valid household samples, 3,663 contained complete home-visit photos, and 1,248 samples still involved skipped or omitted items. The majority of these skipped or omitted items were concentrated in minor questions within matrix items, with little impact on the overall quality of the questionnaires. Statistics showed that 6,907 questionnaires had no entry errors, accounting for 84.70%, and 1,248 questionnaires had entry errors, accounting for 15.30%.

    Invalid household samples were sorted out and analyzed, finding that there were mainly 5 causes of such invalidity. The specific statistical sample sizes are shown in Table 2.

[Table 2 Causes of invalid household samples]


Ⅴ. Weight design

    Weights are crucial for ensuring the correspondence between samples and the target population. Hence, we separately calculated the weights of valid samples. In social surveys, weights are mainly classified into two types: sampling weights and calibration weights. Sampling weights are the inverse of the probability that an individual case is sampled, which is determined by the sampling scheme. Calibration weights use external demographic data to calibrate the population structure of the samples and reduce the bias of sample estimates. This survey data provides both weight variables: the former adjusts for the unequal probabilities in the designed multistage sampling, while the latter further adjusts the structure in order to prevent various potential discrepancies between samples and population, especially demographic ones.

(Ⅰ) Sampling design weight

    This survey used the multistage complex sampling as its basic sampling design. In the first stage, 60 urban districts (PSUs) were selected from the three layers using the PPS method; in the second stage, residents’ committees (SSUs) in these urban districts were selected using the PPS method; in the third stage, households (TSUs) in the selected residential communities were selected using the simple random sampling method without replacement; in the fourth stage, 1 respondent (USU) was randomly selected using the Kish grid at the respondent's home.

    Thus, in the PPS sampling of the first stage, the sampling probability $\pi^{(1)}_{hi}$ of the $i$-th PSU in the $h$-th layer is:

\pi^{(1)}_{hi} = n_h \frac{M_{hi}}{M_h}

    Where $n_h$ is the number of PSUs sampled from the $h$-th layer (5, 15 and 40 for the three layers respectively); $M_{hi}$ is the population of long-term residents aged 16-60 of the $i$-th PSU in the $h$-th layer[1]; $M_h$ is the total population of long-term residents aged 16-60 of the $h$-th layer.

[1] Given the unavailability of working population data, the working population aged 16-60 derived from the Seventh National Population Census was used as an alternative. Working population across all stages of PPS sampling was replaced this way.


    In the second stage, we selected 9 SSUs from each selected PSU using the PPS method. Given that a PSU was selected, the sampling probability $\pi^{(2)}_{ik}$ of its $k$-th SSU is:

\pi^{(2)}_{ik} = m \frac{M_{ik}}{M_{hi}}

    Where $m$ is the number of SSUs sampled from each PSU, which was set as 9; $M_{ik}$ is the population of long-term residents aged 16-60 of the $k$-th SSU.

    In the third stage, we randomly selected 15 household TSUs from each community SSU. Given the inability to obtain true information about household members within a community, we adopted simple random sampling without replacement (SRSWOR) in this stage: with an SSU selected, the probability $\pi^{(3)}_{kj}$ of the $j$-th household being sampled is equal to:

\pi^{(3)}_{kj} = \frac{c}{N_k}

    Where $c$ is the number of households contacted in each community, which is set as 60; $N_k$ is the total number of households in the community.

    In the fourth stage, interviewers carried out in-household sampling. With a home address selected, the sampling probability $\pi^{(4)}_{j}$ of each individual case within the household is equal to:

\pi^{(4)}_{j} = \frac{1}{t_j}

    Where $t_j$ is the number of employed individuals within the household, obtained from item b1, “Of the persons living in your home, how many are employed or plan to be so?”.

    According to the above steps, the sampling probability of a specific respondent entering the sample, denoted $P$, is the product of the four probabilities above:

$$P = P_{1,ij} \times P_{2,k} \times P_{3,l} \times P_4 = \frac{n_i \, M_{ij}}{M_i} \cdot \frac{n \, M_{ijk}}{M_{ij}} \cdot \frac{t}{T_{ijk}} \cdot \frac{1}{e}$$

    Because we adopted non-proportional stratification at the city (district) stratification stage, and because, in the third stage, a lack of the necessary information led us to use simple random sampling without replacement rather than the PPS method, the sampling probability varies from respondent to respondent[1].

[1] It should be noted that we used the population of long-term residents aged 16-60 provided by the Seventh Population Census. As the working-age population is highly correlated with, but different from, the employed population, we used the employment rate as a moderating factor; in other words, we assumed at the sampling stage that the employment rate is constant across sampling units.


    On that basis, the sampling weight $w$ is the inverse of the sampling probability[1], that is:

$$w = \frac{1}{P}$$

[1] The variable in the data is named pweight.
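Putting the four stages together, the overall inclusion probability and the resulting design weight (stored as pweight) can be sketched as follows; every numeric input is a hypothetical placeholder, not a value from the actual sampling frame:

```python
def overall_prob(n_i, M_ij, M_i, M_ijk, t, T, e, n_ssu=9):
    """Product of the four stage probabilities:
    stage 1 (PPS over PSUs within a stratum), stage 2 (PPS over SSUs, 9 per PSU),
    stage 3 (SRSWOR of t = 60 households), stage 4 (1 of e employed members)."""
    p1 = n_i * M_ij / M_i      # PSU within stratum
    p2 = n_ssu * M_ijk / M_ij  # SSU within PSU
    p3 = t / T                 # household within community
    p4 = 1 / e                 # individual within household
    return p1 * p2 * p3 * p4

# Hypothetical respondent: all size measures invented
p = overall_prob(n_i=2, M_ij=120_000, M_i=300_000,
                 M_ijk=5_000, t=60, T=900, e=2)
pweight = 1 / p                # design weight = inverse inclusion probability
```

Note how the $M_{ij}$ term cancels between stages 1 and 2, so a PSU's own size only affects the weight through the stratum and SSU terms.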


(Ⅱ) Calibration weight

    In the sampling process, respondent refusals, vacant households, inaccessibility and similar issues may introduce a structural bias between the sample and the target population on important demographic characteristics. To correct this bias, we used the iterative raking method to adjust the survey data on the demographic variables of gender, age and education against the Seventh Census data.[1]

[1] The variable in the data is named pweight_rake.
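The raking step can be sketched as iterative proportional fitting: weights are rescaled to match one marginal distribution (gender, age, education) at a time, cycling until all margins agree. The toy sample and margins below are invented for illustration:

```python
def rake(weights, categories, targets, n_iter=200, tol=1e-10):
    """Iteratively scale weights so the weighted share of each category
    matches its target margin, cycling through the calibration variables.

    weights:    initial (design) weight per respondent
    categories: dict var -> list of each respondent's category
    targets:    dict var -> dict category -> target proportion
    """
    w = list(weights)
    for _ in range(n_iter):
        max_change = 0.0
        for var, cats in categories.items():
            total = sum(w)
            cur = {}                       # current weighted count per category
            for wi, c in zip(w, cats):
                cur[c] = cur.get(c, 0.0) + wi
            factors = {c: targets[var][c] * total / cur[c] for c in cur}
            for i, c in enumerate(cats):
                w[i] *= factors[c]
            max_change = max(max_change,
                             max(abs(f - 1.0) for f in factors.values()))
        if max_change < tol:               # all margins matched
            break
    return w

# Toy example: 4 respondents with equal design weights and two margins
weights = [1.0, 1.0, 1.0, 1.0]
categories = {"gender": ["F", "F", "F", "M"],
              "edu":    ["hs", "uni", "uni", "hs"]}
targets = {"gender": {"F": 0.5, "M": 0.5},
           "edu":    {"hs": 0.6, "uni": 0.4}}
raked = rake(weights, categories, targets)
```

In practice the starting weights would be the design weights (pweight) and the target margins would come from the census tables; here both are placeholders.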


    1. Gender calibration

    The gender composition of the survey data (as shown in Table 3) differs somewhat from the calibration structure, with females over-represented in the sample.

[Table 3 omitted: gender composition of the survey data vs. calibration data]

    2. Age calibration

    As shown in Table 4, which compares the survey and calibration data, the age structure of the survey data skews older: the percentage of respondents aged 34 and below is slightly lower than in the calibration data, while the percentage of those aged 35-46 is higher.

[Table 4 omitted: age composition of the survey data vs. calibration data]

    3. Calibration of education level

    As shown in Table 5, compared with the calibration composition, the education levels of respondents in this survey are relatively high, with larger percentages holding high school, junior college and undergraduate degrees.

[Table omitted: education composition of the survey data vs. calibration data]


Ⅴ. Conclusion

    For the survey of the 2022-2023 China Working Conditions Research, we carried out a complete project design and stringent implementation control. Leveraging its authoritativeness, the Social and Public Survey Center of NBS carefully organized the survey implementation despite disruption from the COVID-19 pandemic. With tremendous inputs of manpower and materials, the center overcame various difficulties and completed the contracted in-household survey tasks. The auditors of the project group demonstrated a high level of conscientiousness and sense of responsibility, working diligently, sometimes overtime, to meet the requirements of the audit plan. Although problems emerged in the initial stage of the survey, such as repeated sample codes, erroneous household sampling, errors in audit operation, misunderstanding of items and invalid questionnaires, they were quickly resolved and the survey was brought back on track by its middle stage.

    A home-visit survey is a difficult process in which information must be collected from wary, unfamiliar residents. The successful collection of 8,100 questionnaires was inseparable from the professionalism and diligence of the community interviewers, the authoritativeness and organizing efforts of the Social and Public Survey Center of NBS and its staff, and the work ethic and efforts of the members of the project group. Despite certain flaws and areas for improvement in this survey, the survey data of the 2022-2023 China Working Conditions Research retain high quality and can provide a concrete foundation of social facts for assessing and understanding working conditions and the degree of prosperity in Chinese organizations.