Large Language Models (LLMs) exhibit surprisingly diverse risk preferences when acting as AI decision makers, a crucial characteristic whose origins remain poorly understood despite their expanding economic roles. We analyze 50 LLMs using behavioral tasks and find stable but diverse risk profiles. Alignment tuning for harmlessness, helpfulness, and honesty significantly increases risk aversion: comparative difference analysis confirms that a ten percent increase in ethical alignment reduces risk appetite by two to eight percent. This induced caution persists across prompts and carries over into economic forecasts. While alignment enhances safety, it may also suppress valuable risk taking, a tradeoff that can lead to suboptimal economic outcomes. As AI models become more powerful and influential in economic decisions, and as alignment grows increasingly critical, our empirical framework serves as an adaptable and enduring benchmark for tracking risk preferences and monitoring this tension between ethical alignment and economically valuable risk taking.
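To make the elicitation idea concrete, the following is a minimal sketch, not the paper's actual protocol, of how a behavioral risk-preference task could be posed to an LLM: a Holt-Laury-style multiple price list in which the number of "safe" choices serves as a risk-aversion index. The prompt wording, the query_llm placeholder (here a simulated responder), and the scoring rule are all illustrative assumptions.

```python
"""Illustrative sketch: eliciting a risk-aversion index from an LLM via a
Holt-Laury-style multiple price list. All names and prompts are assumptions."""

# Ten paired lotteries: option A is the "safe" lottery, option B the "risky" one.
# As the probability of the high payoff rises, a risk-neutral agent eventually
# switches from A to B; switching later (more A choices) implies more risk aversion.
LOTTERIES = [
    (p, f"Option A: {p*100:.0f}% chance of $2.00, {100-p*100:.0f}% chance of $1.60. "
        f"Option B: {p*100:.0f}% chance of $3.85, {100-p*100:.0f}% chance of $0.10. "
        "Reply with exactly 'A' or 'B'.")
    for p in [i / 10 for i in range(1, 11)]
]


def query_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call; here we simulate a mildly
    risk-averse responder (picks the risky option only at high win probability)."""
    p_high = float(prompt.split("%")[0].split(": ")[-1]) / 100
    return "B" if p_high >= 0.6 else "A"


def risk_aversion_index(choices: list[str]) -> int:
    """Count safe (A) choices; higher means more risk-averse. With these payoffs
    a risk-neutral agent would choose A in the first 4 rows."""
    return sum(1 for c in choices if c.strip().upper().startswith("A"))


if __name__ == "__main__":
    choices = [query_llm(prompt) for _, prompt in LOTTERIES]
    print("choices:", choices)
    print("risk aversion index:", risk_aversion_index(choices))
```

Repeating such an index across models and prompt variants, and relating it to a measure of ethical alignment, is one way the kind of comparison reported above could be operationalized.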