
ESG for Tech: Social

Explore the social aspects of ESG for tech companies, including data privacy, algorithmic bias, and organizational culture. Learn how board members can guide their organizations towards responsible social practices while maximizing business value.
tl;dr

The social aspect of ESG for tech companies encompasses data privacy, algorithmic bias, and organizational culture. Board members should focus on implementing strong data privacy and security programs, ensuring ethical use of algorithms, and fostering a positive workplace culture. These efforts not only improve ESG metrics but often lead to better business outcomes through reduced risks, improved brand value, and increased employee retention.

In my discussion of ESG broadly and environmental impact specifically, I have repeatedly encountered the idea that what’s good for an organization’s profit is often good for its ESG metrics as well. For many companies, this is especially true of the S in ESG, which typically refers to social impact as well as, for some investors, human capital concerns.

The rapid pace of technological advancement, particularly in AI and data science, has amplified boards’ responsibilities in overseeing ethical practices, data privacy, and organizational culture (though I’ll admit that many of these also span governance). This post explores key social considerations with the hope that by understanding these issues and their potential impact on companies’ long-term success, boards can better fulfill their fiduciary duties and drive responsible innovation.

Social Impact and Human Capital

Social and human capital concerns are historically related to the traditional concepts of brand value, goodwill, and labor relations. Today, investors, employees, and customers often ask more generally how an organization’s decisions affect both its internal stakeholders and broader society.

Social impact can manifest in various ways:

  1. Direct impact through products or services
  2. Interactions with employees, contractors, and supply chain partners
  3. Relationships with communities where the business operates

Organizations that maintain good relations with their personnel and community partners often suffer less conflict, reducing legal expenses, risk of regulatory fines, and employee turnover. This directly impacts the bottom line and should be a key consideration in board-level discussions.

Technology and Professional Services: Unique Social Challenges

For technology and professional services organizations, social and human capital impacts often come from the use or misuse of data and AI. AI ethics has been front and center in many discussions - and rightfully so! It comes as no surprise that AI plays a huge part in the social impact of many tech companies today. Given the outsized impact that AI has on our world, it should receive an accordingly substantial focus from companies’ boards, particularly in the following areas:

1. Data Privacy and Security

Almost all “cloud” services or software-as-a-service (SaaS) businesses receive and store information about individuals and organizations. When this information is “lost” or inappropriately accessed or sold, the individuals and organizations might be harmed.

A notable example of AI ethics in the area of data privacy is respecting customers’ data. While it’s appealing to use customer data to train a model that could create significant enterprise value for your organization, this should be done in a way that is not only legal, but also respectful of your customers’ expectations about their data. As with other technical strategies, the board should be kept apprised so it can ensure that the approach aligns with the organization’s overall risk management and governance.

In order to manage the wide array of statutory and contractual obligations related to data privacy, organizations need to invest in building strong, technology-enabled data privacy and information security programs. The Responsible Data Science Policy Framework that I developed can help companies ensure that their data science activities are conducted responsibly from a technical, legal, and ethical perspective. Protecting against harm requires a strong commitment to data privacy and information security, and organizations that chronically underinvest in these areas tend to suffer as a result of data breaches or brand reputation damage. It’s the responsibility of the board to ensure that sufficient resources are given to establishing proper governance of data.
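As a concrete illustration of what “technology-enabled” privacy controls can mean in practice, consider gating model training on recorded customer consent. This is a minimal sketch, not a description of any specific framework; the `CustomerRecord` type and the `consented_to_model_training` flag are hypothetical names chosen for the example.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerRecord:
    """Hypothetical customer record with an explicit consent flag."""
    customer_id: str
    consented_to_model_training: bool
    attributes: dict = field(default_factory=dict)

def select_training_records(records: list[CustomerRecord]) -> list[CustomerRecord]:
    """Keep only records whose owners have opted in to model training.

    Enforcing consent at the data-selection step, rather than relying on
    downstream teams to remember the obligation, is one way a privacy
    program can be built into the pipeline itself.
    """
    return [r for r in records if r.consented_to_model_training]
```

A real program would layer contractual and jurisdictional rules on top of a simple flag, but encoding the check in code makes the policy auditable, which is exactly the kind of resourcing question a board can ask about.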

2. Algorithmic Bias

Some organizations encounter social impact considerations when they systematize decisions via algorithms, such as in credit underwriting or recruitment processes.

While much recent attention has been focused on possible bias in machine learning data sets and models, such concerns date back long before modern data science. The Equal Credit Opportunity Act (ECOA), enacted in 1974, and its related disparate impact analyses provided a framework for managing social impact long before random forests or transformer models were introduced. Companies developing algorithms should ensure that data privacy and ethical review are incorporated into their research process as part of a holistic data science maturity assessment. By developing such a mature data science program, companies can validate their commitment to minimizing social harm while supporting their strategic vision.
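One long-standing screen from the disparate impact tradition mentioned above is the “four-fifths rule”: compare the selection rate of a protected group against that of a reference group, and treat a ratio below roughly 0.8 as a signal warranting closer review. The sketch below is illustrative only; the group labels and threshold are assumptions for the example, and passing such a screen is not a substitute for a full fairness or legal review.

```python
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive outcomes (1 = selected, 0 = not selected)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected: list[int], reference: list[int]) -> float:
    """Ratio of the protected group's selection rate to the reference group's.

    Under the four-fifths rule of thumb, values below ~0.8 are commonly
    treated as evidence of potential adverse impact.
    """
    return selection_rate(protected) / selection_rate(reference)
```

For example, if a protected group is selected 2 times out of 5 (rate 0.4) and the reference group 4 times out of 5 (rate 0.8), the ratio is 0.5, well below the 0.8 rule-of-thumb threshold and a prompt for further investigation.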

3. Organizational Culture

One of the most common issues faced by technology and professional services companies relates to employee turnover. Like a factory constantly changing parts and plans, organizations that are churning through personnel are likely to suffer from inefficiency, loss of institutional knowledge, and financial setbacks. Developing an attractive culture – as well as compensating competitively – can help organizations score well on human capital metrics and increase the productivity of their teams.

Board-Level Considerations

Boards can drive social responsibility initiatives that not only improve the company’s ESG metrics but also lead to better business outcomes. Here are some key actions to consider:

  1. Advocate for robust data privacy and security programs, ensuring they are adequately resourced and regularly audited.
  2. Push for the implementation of ethical AI frameworks and regular audits of algorithmic decision-making processes.
  3. Encourage the development of a strong, positive organizational culture that attracts and retains top talent.
  4. Ensure that social impact considerations are integrated into the company’s overall risk management strategy.
  5. Push for regular reporting on social initiatives and their impact on the company’s brand value, employee retention, and customer satisfaction.
  6. Consider forming a board-level ethics committee or assigning specific social responsibility oversight to an existing committee.

By taking a proactive approach to these social considerations, your company can not only improve its ESG metrics but also reduce risks, enhance its brand value, and create a more productive and stable workforce. As social considerations become increasingly important to investors, customers, and employees, your leadership in this area will position the company for long-term success.

In my next and final post in this series, I’ll explore the ‘G’ in ESG, discussing the governance aspects that are crucial for tech companies in today’s complex regulatory and ethical landscape.
