How We Should Invest in AI, Part III—Our Ethical Criteria for AI Investments
The final installment of a three-part series on Taylor Frigon Capital's approach to AI
In Part I and Part II of this series on how we should invest in AI, we focused primarily on analyzing the drawbacks and risks of AI rather than on its opportunities and benefits. After reading those articles, you may be wondering whether there is any moral justification at all for investing in AI, given the seriousness of its potential risks and the ease with which people can misunderstand and misuse it. Yet the purpose of those articles was to argue that exactly this kind of apprehension, born from an accurate understanding of the nature of both AI and humanity, must precede any decision to invest in AI. It is only through this lens that we can see clearly how AI can be used for the common benefit of our society.
The key message we wish to impart is that AI is a tool, and its morality depends on the degree to which it is used in the service of the good of the human person. Its enormous potential means that it can be used—intentionally or unintentionally—for great harm, but this makes it imperative that people with well-formed consciences, guided by truth, are involved in its development and application. Investors can play a critical role in the development and application of AI, and since we at Taylor Frigon are dedicated to generating value for both our clients and broader society with our investments, we endeavor to participate in that role in both an ethical and effective manner.
In this article, we will present our current views on how to invest in AI in a way that mitigates the ethical risks, increases the potential benefits for society, and delivers financial value for our clients as effectively as possible.
Where Are We in the AI Story?
Despite the rapid advancements in AI technology in the past few years, it is important to understand that the AI industry writ large is still in its infancy. There is significant hype surrounding AI, but much of this is precisely because it has yet to reach its full potential. There are many theories about what AI will one day be able to accomplish, but they remain just that: unproven theories. AI chatbots built on LLMs remain the dominant products, but these are still primarily being used to assist in mundane workflow tasks and generate flawed graphics (such as the banner image in this article).
The reality is that the industry is still in the nascent stage of building the infrastructure of AI necessary to realize its grand visions for the technology. Big Tech companies are pouring billions of dollars into building the data centers to run these models, purchasing the relatively scarce advanced chip systems themselves, and securing access to the energy needed to power these highly energy-intensive AI data centers. The hottest AI companies in the market are those that are involved in this infrastructure buildout in some way, such as NVIDIA, Microsoft, Taiwan Semiconductor, and CoreWeave.
These Big Tech companies are spending so much on this effort that there are even questions on Wall Street about whether the demand is overhyped and whether there will be enough AI applications of sufficient value to justify the capital expenditures. The industry could be facing a significant bubble akin to the dot-com bubble of 2000, which was fueled by significant hype around the internet. It is ironic that some people are concerned about the success of AI eliminating jobs and hurting the economy, when there is a much greater likelihood that the bursting of an AI bubble would do significant damage to the economy.
In fact, the growth of the internet is an apt analogy for understanding the state of the AI industry today. The buildout of AI infrastructure, with its focus on chips and data centers, is akin to the buildout of internet infrastructure in the 1990s. The dot-com bubble was the result of significant investment in internet companies that offered no real value but were hyped simply because they had “.com” in the name. We are already seeing the equivalent today, with companies that claim to be AI companies and have “.AI” in the name often receiving more attention than they deserve. It wasn’t until several years after the dot-com bubble burst that internet companies like Facebook, Amazon, and Netflix succeeded and made the internet the economic powerhouse it is today.
The purpose of describing this analogy is not to warn you about an AI bubble bursting. Rather, it is to suggest that AI’s development will mirror the internet’s growth: an infrastructure-building stage followed by an application-development stage, which has implications for how we decide to invest. Investing in AI infrastructure has less of an ethical component than investing in AI applications, just as the decision to invest in Cisco in the 1990s had less of an ethical component than a decision to invest in DraftKings today. This is because infrastructure technologies are general-purpose and retain the capacity to serve the common good and enhance human flourishing regardless of the applications later built on them.
Good Applications of AI
Although the focus is currently on building AI infrastructure, there are certainly companies today already developing AI applications—for better or worse—which is why we cannot delay in establishing the criteria needed to evaluate them. We must be prepared for the transition to that next stage in AI’s development—the application stage—which will require much more ethical scrutiny to ensure our hard-earned dollars are put in service of the common good. Below we will provide the general criteria we have established based on our understanding of the nature of AI and the human person, and then we will offer some examples of how we can use these criteria to evaluate some general applications we can likely expect from AI companies.
Here are Taylor Frigon Capital’s main ethical criteria for investing in an AI company:
The company’s product or service is primarily intended to contribute to the economic, intellectual, social, bodily, or spiritual flourishing of the person and is predominantly used for this end.
The company must make reasonable efforts to prevent misuse of its technology, particularly where potential harm exists, and ensure it does not violate the inherent dignity of the human person.
The company must implement safeguards to ensure that autonomous AI systems remain subject to human oversight and control during their operation.
The essence of these criteria is ensuring that the technology’s purpose is centered on serving the good of the human person, as defined by our Judeo-Christian worldview. These criteria take into account the reality that a tool can be turned into a weapon in the service of evil ends, and that accidents can always occur. Given the potential for misuse of AI technology, companies bear a greater responsibility to take reasonable measures to mitigate these risks and to limit the harmful effects when they do occur.
To illustrate how these criteria could be applied in practice, here are some examples drawn from morally complex yet increasingly likely uses of AI.
Example 1: AI robot technology that performs a task that previously only a human could
One of the biggest fears about AI is that it will take away jobs from human beings. While we understand this concern, particularly for those whose jobs are at high risk of being replaced by AI, it is not something that should be opposed wholesale. In discussing this issue, we often allude to the anecdote of the buggy whip producer in the early 20th century. When the automobile was invented, those who produced buggy whips for horse-drawn carriages eventually went out of business. However, the automobile also created new jobs, which by the 21st century have become too numerous to count.
AI will likely have a similar effect in that it may eliminate, or at least dramatically change, certain jobs and tasks. If the intention of an AI robot technology is to improve people’s well-being by assuming jobs or tasks that are mundane, dangerous, or physically taxing, freeing workers or consumers to perform tasks that are more important or enriching, then there is little justification to condemn it. Conversely, if the AI robot is assuming a task that should be performed, or at least led, by a human for reasons inherent to the role, then it should be avoided. There are some roles for which humans are uniquely suited because of their social or emotional characteristics, and which an AI simply should not replace—such as therapists, teachers, or judges.
Example 2: AI medical technology that is integrated into the human body
There is significant potential for AI in the field of medical technology, but this arguably makes it one of the most morally contentious areas as well. The potential for medical applications of AI to be misused is great, and the ethics can seem ambiguous because the same technology can often be used for both good and bad purposes. For example, medical implants driven by AI can address bodily or neurological disorders, significantly enhancing the flourishing of the person. However, the same kinds of implants could easily degrade the person if, for example, the computer exerts too much control over the person’s cognitive functions.
This is why the intention of the technology is extremely important. If the intention of an AI medical technology is to help a person become more authentically human due to some medical deficiency, then it is likely to be ethical. If the intent is to “upgrade” the body of a perfectly healthy individual, to try to “hack consciousness” or create some sort of superhuman cyborg, then it is likely an unethical use of the technology and should be avoided.
Example 3: AI technology in military and security applications
With the Terminator series still vivid in people’s imaginations, it is no wonder that there are significant concerns about the use of AI in military and security applications. The most common practical concern is that autonomous AI weapons could accidentally kill innocents or make other targeting mistakes that lead to undesired outcomes. There is also a justified unease with the prospect of using AI predictions of people’s behavior, which could be faulty, as the basis for military or security actions.
Yet it is no surprise that AI is considered the object of a new arms race, akin to the nuclear arms race of the 20th century, with world powers vying for the most dominant AI technology to gain a military edge over their potential adversaries. This investment scenario is unique from the perspective of ethical evaluation: for most companies applying AI to military uses, the intent of the product is to do harm. However, we would argue that, based on a prudential assessment of the realities of modern international relations, there is a compelling moral argument for investing in AI designed for responsible use in national security.
Great care must be taken to ensure that such technology is built with adequate safeguards, and that there is always a “human in the loop” in weapons-release decision-making. For AI data analysis in law enforcement and security, investors also need to determine whether such efforts pose a threat to civil liberties and the dignity of the person. Military AI companies do have an important role to play in providing for the defense of the community in the modern world, which is a fundamental necessity for preserving a society oriented toward the common good. Therefore, this is an area that can provide opportunities for good and ethical AI investments.
Example 4: AI technology in education
Seeing as “intelligence” is literally in the name of AI, it is undoubtedly going to be a critical tool for the forming of minds in the near future. However, for an education AI technology to be ethical it must remain just that—a tool. We must keep in mind that the true education of the individual does not just consist in the intellect, but also the character and virtue of the person. We must also remember that having access to information does not necessarily make people smarter. AI education companies will each claim to provide access to the great deposit of human wisdom and knowledge, but that is not enough to make the technology effective as an educator.
AI education companies must be able to articulate how they developed their AI education agent: what data it was trained on, what moral code or philosophy guided it, how it was designed to meet the needs of the student, and so on. The intent cannot be to replace teachers, or even to give the student full control over the AI tutor. Teachers must take the lead in using AI as a tool for education, and they must be able to trust that whatever information is provided to the student is accurate, useful, and ethical. These considerations are central to any decision to invest in AI education technology.
A New Era
We could go on with more examples of moral dilemmas in AI investing and how we would approach them, but we hope this will suffice for now in explaining how Taylor Frigon Capital approaches investing in AI from an ethical perspective. Of course, we also have our own ideas and perspectives on investing in AI from a purely practical financial standpoint. We will certainly publish these perspectives as well, particularly in the Taylor Frigon Research Department’s Investment Research publication, which is focused on providing fundamental equities research according to our narrative-based investing approach. We recently posted Part I of our Semiconductor and Data Center Narrative in that publication, which directly discusses how we consider investing in AI from a practical view.
However, to finish up our thoughts on the ethical side of investing in AI, the bottom line is that while we must be aware of the significant risks of AI as investors, we also cannot let this paralyze us with fear. At the end of the day, whether AI is used for good or evil will depend on us. We need more people who have a firm understanding of the nature of this technology and of the nature of humanity, which is why we wanted to publish this series. Some of you may not agree with certain aspects of our positions, but we hope that you will at least understand them and understand that our priority is to act ethically in our investment practices according to our deeply held beliefs about the world and our place in it.
We have entered a new era with the advent of AI technology. We must think through the implications of this new era with seriousness and intellectual rigor. We hope this article series has shown you how we are approaching this and will prompt future discussions about how best to approach investing in AI in general.
We would love to hear your thoughts. Reach out to us anytime if you are interested in talking. In the meantime, we hope you enjoy life and live it to the fullest—without undue worry about what tomorrow might bring. God bless!