
ADVANCING TRUST
BETWEEN HUMANS AND MACHINES

We believe that everyone impacted by an artificial intelligence (AI) solution's recommendation deserves to be treated fairly and equally.

We were propelled into action by stories of people disadvantaged, opportunities missed, and biases perpetuated.
 
AI solutions are here to stay.
Their accidental flaws need not be.


AI TRUSTWORTHINESS

The inability to easily determine the trustworthiness of an AI solution is a monumental problem affecting all levels of our society.

We have already witnessed many instances where untrustworthy AI solutions led to unintended, and sometimes deadly, consequences, such as discrimination based on race, age, or gender and the spread of disinformation.

As AI gets deployed at scale, these issues, if left unchecked, will only be amplified, leading to lawsuits, regulatory fines, dissatisfied customers, reputational damage, and erosion of shareholder value.

Academics, policymakers, and governments across the globe increasingly recognize the importance of AI Trustworthiness. It is imperative for all organizations creating, using, investing in, or regulating AI solutions to adopt and enforce AI Trustworthiness principles without delay.


Source: IBM’s multidisciplinary, multidimensional approach to trustworthy AI

State of AI Trustworthiness by the Numbers

75%

of executives view ethics as a source of competitive differentiation

40%

of the general public trusts companies to be responsible and ethical in their use of new technologies such as AI

<20%

of executives strongly agree that their organizations’ practices and actions on AI ethics match (or exceed) their stated principles and values

80%

of AI Trustworthiness champions are non-technical executives


ABOUT US

We help organizations advance their trust in AI solutions through a human-led, technology-enabled approach.

Our unique perspective is shaped by broad industry, consulting, and academic experience. 

Multidisciplinary Team

Our team and the Subject Matter Expert (SME) Community consist of:

  • experienced industry experts,

  • seasoned C-suite advisors,

  • academics, including:

    • statisticians,

    • computer scientists,

    • data scientists, and

    • ethicists,

all of whom are active and published researchers within their domains of expertise.

Trusted Approach

Our human-led, technology-enabled approach combines two decades of strategy, management, and technology consulting experience focused on advancing trust between humans and machines with over 15 years of academic research across relevant AI trustworthiness domains.

Robust Methodology

Our methodology and AI Trustworthiness framework are the result of an in-depth analysis of the most current and reliable AI trustworthiness sources across the globe, further enhanced and approved by our SME Community members.


SERVICES

We enable organizations that create or seek to use trustworthy AI to do so with confidence.

AI Solution Trustworthiness Assessment

An in-depth solution assessment covering ethical and technical aspects of design, data, algorithms, models, processes, procedures, and ModelOps:

  • Assess the AI solution's risk and organizational maturity

  • Proactively identify issues that lead to unintended consequences

  • Develop issue mitigation strategy

  • Obtain AI solution trustworthiness certification

AI Trustworthiness Strategy Design / Redesign

Strengthen your organization's AI strategy by incorporating AI trustworthiness principles and best practices

Fractional Chief AI Trustworthiness Officer

Assisting organizations in establishing and operationalizing a Chief AI Trustworthiness Officer role, leveraging TrustVector’s multidisciplinary community of AI Trustworthiness Subject Matter Experts across relevant domains:

  • Ensure accountability for AI-related decisions and actions

  • Infuse ethical AI practices throughout AI solution lifecycle

  • Protect against unintended consequences of misbehaving AI

OUTCOMES

Guided by our mission and leveraging our multidisciplinary, human-led, technology-enabled approach for determining AI trustworthiness, we help:

AI Creators

1. Gain and Maintain Customer Trust

Prove to your customers, investors, and regulators that the AI solutions you are creating are trustworthy

2. Increase Market Differentiation

Increase your differentiation in the market by demonstrating commitment to trustworthy AI across your solutions

3. Increase Product Revenue

Speed up the sales cycle and increase your AI product revenue

 

4. Minimize Risk

Stay on top of AI regulations and minimize unintended consequences and regulatory risk

AI Buyers & Users

1. Identify and Remedy Issues

Proactively identify and remedy AI trustworthiness issues in the AI solutions you are purchasing and using

 

2. Increase Solution Adoption

Rapidly increase solution adoption through a repeatable AI trustworthiness early issue detection process during the entire AI solution lifecycle

3. Attain Faster ROI

Attain desired ROI faster through early and continuous AI trustworthiness verification, combined with robust AI/ML Ops and data governance processes

4. Minimize Risk

Minimize risks associated with unplanned outcomes when making critical decisions

Investors

1. Uncover AI Trustworthiness Flaws Early

Uncover hidden trustworthiness flaws in the AI solutions you are investing in

2. Lower Investment Portfolio Risk

Lower your investment risk by uncovering AI Trustworthiness flaws early and often, during and after funding rounds

3. Minimize Litigation Risk

Reduce risk of expensive litigation through a continuous AI Trustworthiness monitoring process

Regulators

1. Enhance Current Capabilities

Enhance your capabilities to regulate products that rely on AI recommendations

2. Advance Approval Process

Improve the reliability of your approval process for AI-enabled products

3. Reduce Risk to Consumers

Reduce the risk of unforeseen negative impacts on AI solution users (consumers)

LEADERSHIP


Meet our leaders

Aleksandar Jevtic
Co-founder & CEO

With over 20 years of leadership experience across health industries, Aleksandar has dedicated his entire career to advancing trust between humans and machines. He is a Responsible AI industry leader, an IEEE certified AI Ethics Assessor, and co-founder and CEO of TrustVector, a Chicago-based company that helps organizations create and adopt trustworthy artificial intelligence (AI) solutions. Under his leadership and vision, a collective of experienced ethics, legal, and technical experts is united in the mission to ensure that everyone who is impacted by an AI recommendation is treated fairly and equally, and that the risks and benefits of automated decision making are transparent and accountable.

 

The TrustVector team enables organizations that seek to create and use trustworthy AI solutions to do so with confidence by providing independent third-party AI solution trustworthiness and risk assessments, assisting with the development and implementation of trustworthy AI governance practices, and designing or redesigning AI strategy. Leveraging its multidisciplinary, human-led, technology-enabled approach, TrustVector helps AI technology creators prove to their customers, investors, and regulators that the AI solutions they are creating are truly reliable, differentiate themselves from the competition by demonstrating commitment to AI trustworthiness across their solutions, increase their AI product revenue, and minimize unintended consequences and regulatory risk.

Aleksandar is a senior executive advisor with over 20 years of leadership experience setting up, expanding, and managing professional services organizations and teams that develop and bring to market technology-based services and solutions across the US, Canada, and Europe.

Sonja Petrović
Co-founder & CSO

As a co-founder and Chief Scientific Officer (CSO), Sonja is responsible for evaluating and setting TrustVector's scientific priorities, envisioning and leading the development of scientific capabilities, and building academic partnerships. Sonja is also an Associate Professor in the Department of Applied Mathematics, College of Computing, at Illinois Institute of Technology. She discovers, studies, and models structure in the relational data we encounter in our daily lives. Sonja's academic research portfolio spans theoretical and applied problems, including the development of statistical models for discrete relational data, such as social networks, and the use of machine learning to predict and improve the behavior of algebraic computations. Sonja is passionate about social responsibility, inclusion, community engagement, and empowering others. She leads the Socially Responsible Modeling, Computation, and Design (SoReMo) initiative, empowering students to enact the positive societal change they are passionate about within Illinois Tech, Chicago, and beyond.


CONTACT INFO

If you believe what we believe, let's talk!

TrustVector LLC
222 W. Merchandise Mart Plaza, Suite 1212
Chicago, IL 60654

info@trustvector.ai

Follow us on LinkedIn.

CONTACT FORM


Your privacy is important to us. Learn more here.