An international survey of employees, managers, and citizens found that Asian countries such as Japan and Korea exhibit the lowest levels of openness to AI innovations, whereas Western countries such as the US and Australia exhibit moderate levels of trust in AI. A notable exception to this pattern is China, an Asian country with the highest level of trust in and openness to AI among all major economies. Yet there is virtually no management research explaining these large cross-cultural differences in trust in AI. To efficiently identify the cultural values underlying trust in AI, I trained a deep learning model to predict country-level trust in AI from 554 values measured across 153,828 individuals. The model had high predictive validity (r = 0.88 between actual and predicted trust-in-AI scores). A feature importance analysis identified respect for authority (a key component of the cultural value power distance) as the top predictor of trust in AI. Four correlational and experimental studies confirmed that power distance is associated with greater trust in, and support for, AI. This research illustrates that an abductive, data-driven approach complements the traditional deductive and inductive methods of hypothesis generation common in the field. More broadly, this work advances a stream of research in which I develop Machine Learning-Based Quantitative Grounded Theory (MLQGT), a framework for understanding how machine learning and theory development can coexist in the field of management.
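The modeling pipeline described above (predict an outcome from many survey features, check predictive validity via the correlation between actual and predicted scores, then rank features by importance) can be sketched in miniature. The sketch below uses synthetic data, a simple least-squares model as a stand-in for the deep learning model, and permutation importance as one common feature-importance method; the feature count, sample size, and data are illustrative assumptions, not the study's actual data or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 60 "countries", 5 cultural-value features.
# Feature 0 plays the role of "respect for authority": by construction
# it drives the outcome most strongly.
n, p = 60, 5
X = rng.normal(size=(n, p))
trust = 0.9 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * rng.normal(size=n)

# Fit a simple linear model (least squares) as a stand-in for the
# deep learning model described in the abstract.
Xb = np.column_stack([X, np.ones(n)])
coef, *_ = np.linalg.lstsq(Xb, trust, rcond=None)
pred = Xb @ coef

# Predictive validity: correlation between actual and predicted scores
# (the abstract's r = 0.88 is this kind of statistic, on held-out data).
r = np.corrcoef(trust, pred)[0, 1]

# Permutation feature importance: shuffle one feature at a time and
# measure how much the actual-vs-predicted correlation drops.
def permutation_importance(j):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    pred_p = np.column_stack([Xp, np.ones(n)]) @ coef
    return r - np.corrcoef(trust, pred_p)[0, 1]

importances = [permutation_importance(j) for j in range(p)]
top_feature = int(np.argmax(importances))
print(f"r = {r:.2f}, top predictor = feature {top_feature}")
```

In a real application the model would be evaluated on held-out countries (otherwise in-sample r overstates predictive validity), and importance scores would be averaged over many permutations.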