ChatGPT is coming to town: The use of AI in retail
While Santa Claus is warming up his sleigh and getting ready for his annual trip around the world later this month, it seems the elves have been developing new uses for Artificial Intelligence (AI) technology to help with the rapid selection and deployment of gifts.
AI is rapidly transforming the retail landscape, offering new ways for retailers to engage customers, streamline operations, and drive growth. As the technology matures, its applications are becoming more sophisticated, moving beyond simple automation to deliver highly personalised experiences, operational efficiencies, and entirely new business models.
While these new uses of technology are to be applauded, they bring with them a minefield of new legislation and compliance requirements for businesses, especially multinationals.
New uses of AI in retail: Ready for the festive season
AI’s integration into retail is no longer a novelty; it is a strategic imperative. Recent developments highlight how leading retailers are embedding AI across various facets of their businesses.
Customer service and engagement
Retailers such as Lowe’s have made significant strides by deploying AI-powered virtual assistants. Lowe’s Mylow and Mylow Companion handle nearly a million customer enquiries each month, ranging from product information to order updates. These virtual agents not only improve response times but also free up human staff for more complex tasks, enhancing overall productivity and customer satisfaction.
Similarly, Amazon has enhanced its Rufus AI shopping assistant, which now leverages conversational context and real-time recommendations. Rufus can reorder previously purchased items, suggest alternatives if products are unavailable, and allow users to manage the information the assistant retains about them. This level of personalisation and convenience is setting new standards for customer interaction.
Personalised shopping and recommendations
Online fashion retailer ASOS has introduced AI-powered stylists through its “Styled for You” feature. This tool analyses a shopper’s purchase history and preferences to recommend curated outfit combinations, aiming to boost engagement and counteract sales declines. J. Crew has also invested in AI-driven personalisation, offering tailored recommendations on its mobile app and website, and equipping sales associates with AI tools to enhance in-store interactions.
Operational efficiency and project planning
Home Depot’s Blueprint Takeoffs is an example of AI streamlining complex processes. This tool generates comprehensive material lists and cost estimates from project plans, reducing preparation time from weeks to days. Such solutions are invaluable for professional renovators and builders, enabling more efficient project planning and direct purchasing.
AI-driven sales and inventory management
J. Crew’s use of AI extends to back-end operations, where predictive models optimise inventory in response to real-time demand signals. AI chatbots provide employees with actionable sales data, supporting better decision-making and resource allocation.
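To make the inventory point concrete, here is a minimal sketch of demand-driven replenishment using the classic reorder-point formula. This is an illustration only, not a description of any retailer’s actual system; the SKU sales figures and lead time are invented.

```python
import statistics

def reorder_point(daily_demand, lead_time_days, service_factor=1.65):
    """Reorder point = expected demand over the supplier lead time plus a
    safety stock scaled by demand variability. A service_factor of ~1.65
    targets roughly a 95% service level under a normal-demand assumption."""
    mean_d = statistics.mean(daily_demand)
    sd_d = statistics.stdev(daily_demand)
    safety_stock = service_factor * sd_d * (lead_time_days ** 0.5)
    return mean_d * lead_time_days + safety_stock

# Hypothetical last-14-days unit sales for a single SKU
sales = [12, 15, 11, 14, 13, 18, 16, 12, 15, 17, 13, 14, 16, 15]
rp = reorder_point(sales, lead_time_days=7)
# A replenishment order is triggered once on-hand stock falls below rp
```

In a real deployment the demand signal would come from live point-of-sale feeds and a forecasting model rather than a flat historical average, but the escalation from signal to reorder decision follows the same shape.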
New shopping channels and experiences
Target has launched a beta app on OpenAI’s ChatGPT, allowing customers to shop directly within the chatbot platform. This integration enables account linking, multi-item purchases, and access to fresh food, positioning Target as a pioneer in conversational commerce.
Automated Decision Making (ADM)
Notably, the examples above all rely on automated decision-making (ADM), which has moved to the centre of legislative attention worldwide. Eight jurisdictions of key importance to Australian business now impose, or are poised to impose within the next two years, specific obligations on organisations that use algorithms to make or support decisions with legal or similarly significant effects: Australia, the United Kingdom (UK), the European Union (EU), the United States (US), Canada, China, Singapore and India.
The information below sets out a high-level summary of the main legislative provisions and their practical effect for each of the key jurisdictions.
Australia
Key ADM Provision(s) (quoted):
- Organisations must take reasonable steps to implement practices, procedures and systems that ensure compliance… including providing mechanisms to correct decisions made on the basis of personal information.
- Right to request human review of automated decisions that have legal or similarly significant effects.
Practical effect for corporate ADM:
- Mandates transparency of algorithmic criteria and gives individuals the right to seek correction or review of ADM outcomes that rely on personal data.
- Will impose explicit human-review rights, record-keeping on algorithms, and mandatory algorithmic impact assessments (AIAs).
Penalties for breach:
Civil penalties up to AUD 50 million or 30% of adjusted turnover for serious or repeated breaches.
United Kingdom
Key ADM Provision(s) (quoted):
- Data subjects… have the right not to be subject to a decision based solely on automated processing… which produces legal effects.
- “No solely automated decision with legal/similar effect unless authorised”; introduces risk assessments for “significant decisions” and expands the right to human intervention.
Practical effect for corporate ADM:
- Requires meaningful information about logic, right to human review, and restrictions on solely automated decisions in hiring, credit, insurance and similar contexts.
- Refines ADM compliance tests, relaxes some record-keeping but hard-codes outcome-specific rights; demands clearer governance of AI models.
Penalties for breach:
Fines up to the higher of £17.5 million or 4% of global annual turnover.
European Union
Key ADM Provision(s) (quoted):
- The data subject has the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects…
- High-risk AI systems shall comply with the requirements set out in Chapter 2 before being placed on the market or put into service.
Practical effect for corporate ADM:
- Requires lawful basis plus explicit consent or contractual necessity; mandates safeguards such as human review and explanation; applies extraterritorially to EU residents’ data.
- Creates tiered risk regime; high-risk ADM (credit, employment, biometrics, essential services) must undergo conformity assessments, risk management, data-quality checks and human oversight.
Penalties for breach:
Fines up to €35 million or 7% of global turnover for prohibited practices; €15 million or 3% for other breaches.
United States (Colorado)
Key ADM Provision(s) (quoted):
- A developer or deployer of a high-risk artificial intelligence system shall use reasonable care to avoid algorithmic discrimination.
Practical effect for corporate ADM:
- Requires impact assessments, notice to consumers, and opt-out for consequential ADM; applies extraterritorially to entities serving Colorado residents.
Penalties for breach:
Civil penalties up to USD 20,000 per violation; enforced by the state Attorney-General.
United States (California)
Key ADM Provision(s) (quoted):
- Governing a business’s use of automated decision-making technology, including logic and transparency.
Practical effect for corporate ADM:
- Will oblige businesses to provide pre-use notices, allow opt-out of significant profiling, and run risk assessments once CPPA regulations take effect (expected 2026).
Penalties for breach:
Civil penalties up to USD 2,500 per violation, or USD 7,500 for intentional violations or violations involving minors’ data.
Canada
Key ADM Provision(s) (quoted):
- A person responsible for a high-impact system must identify, assess and mitigate risks of harm or biased output.
- Organisations should be prepared to explain decisions made about individuals based on automated processing of personal information.
Practical effect for corporate ADM:
- Introduces mandatory risk governance, record-keeping and compliance attestations for “high-impact” AI deployed in Canada, with extraterritorial reach.
- Requires meaningful explanation on request; triggers accountability obligations for algorithmic profiling in commercial activities.
Penalties for breach:
Penalties up to the greater of CAD 25 million or 5% of global turnover; criminal fines of the same magnitude for egregious breaches.
China
Key ADM Provision(s) (quoted):
- Personal information processors using automated decision-making shall ensure transparency, fairness and impartiality of decision-making and shall not impose unreasonable differential treatment.
Practical effect for corporate ADM:
- Demands algorithmic transparency, right to refuse personalised recommendations, and mandatory opt-out for marketing profiling.
Penalties for breach:
Fines up to RMB 50 million or 5% of the previous year’s turnover; personal liability for officers; business suspension.
Singapore
Key ADM Provision(s) (quoted):
- An organisation shall ensure personal data is accurate and complete if it is likely to be used to make a decision that affects the individual.
Practical effect for corporate ADM:
- Imposes a duty to ensure data quality and provide access/correction rights when ADM is used; the PDPC Advisory Guidelines on AI add expected accountability measures.
Penalties for breach:
Financial penalties up to the higher of SGD 1 million or 10% of Singapore annual turnover.
India
Key ADM Provision(s) (quoted):
- Data principal has the right to obtain an explanation of decisions taken solely on automated processing which has a significant impact.
Practical effect for corporate ADM:
- Creates explanation and grievance rights, mandates audits for significant data fiduciaries, and allows Data Protection Board to order cessation of unlawful ADM.
Penalties for breach:
Penalties up to INR 250 crore per contravention; enhanced penalties for children’s data or repeated breaches.
It is evident from these provisions that there are core trends each jurisdiction is looking to address when legislating ADM.
Human oversight
A striking commonality across the surveyed jurisdictions is the emergence of a baseline right for individuals not to be subjected to decisions made solely by algorithms without meaningful human oversight. For multinationals, this means that any system making final determinations about credit, employment, pricing, or access to services must be capable of escalation to a human reviewer and of furnishing intelligible explanations on demand.
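The escalation requirement described above can be sketched as a simple routing step in a decision pipeline. This is a hypothetical illustration of the human-in-the-loop pattern, not any regulator’s prescribed design; the categories, field names and threshold logic are all invented.

```python
from dataclasses import dataclass

# Decision categories the surveyed laws treat as legally significant
SIGNIFICANT_CATEGORIES = {"credit", "employment", "pricing", "service_access"}

@dataclass
class Decision:
    subject_id: str
    category: str   # e.g. "credit"
    outcome: str    # e.g. "declined"
    rationale: str  # intelligible explanation, retained for disclosure on demand

def route(decision: Decision, review_queue: list) -> str:
    """Adverse outcomes in significant categories are never auto-finalised:
    they are escalated to a human reviewer alongside their rationale."""
    if decision.category in SIGNIFICANT_CATEGORIES and decision.outcome == "declined":
        review_queue.append(decision)  # escalate for human review
        return "pending_human_review"
    return "auto_finalised"

queue = []
status = route(
    Decision("cust-42", "credit", "declined",
             "debt-to-income ratio above threshold"),
    queue,
)
```

The key design point is that the explanation travels with the decision, so the organisation can satisfy both the human-review right and the explanation right from the same record.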
High-risk systems
The second theme of the legislation implemented by the key jurisdictions is the migration of ADM regulation from data-protection statutes into purpose-built AI or digital-services frameworks. Australia has so far been reluctant to implement AI-specific legislation; instead, the Federal Government’s AI guidance documents released in December 2025 recommend that entities adopt a risk-tiered model, voluntarily elevating obligations for “high-risk” systems. In contrast, the European Union (through the Artificial Intelligence Act (EU AI Act)), the US (via Colorado’s Artificial Intelligence Act (CAI Act)) and Canada (through the Artificial Intelligence and Data Act) all legislate this risk-tiered model. These legislative instruments require documented risk assessments, robust testing for bias, continuous monitoring and mandatory incident reporting. They therefore extend compliance beyond traditional privacy teams into enterprise risk, product development, and audit functions.
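A risk-tiered model of the kind described above is, at its core, a classification exercise. The toy function below sketches how a governance framework might bucket AI use cases into tiers with escalating obligations; the category lists are illustrative only and do not reproduce any statute’s actual definitions.

```python
def risk_tier(use_case: str) -> str:
    """Toy classifier mirroring the tiered approach of regimes such as the
    EU AI Act. Real classifications turn on detailed statutory definitions;
    these sets are invented for illustration."""
    high_risk = {"credit_scoring", "recruitment_screening",
                 "biometric_identification", "essential_services_access"}
    limited_risk = {"chatbot", "product_recommendation"}
    if use_case in high_risk:
        return "high"      # conformity assessment, human oversight, logging
    if use_case in limited_risk:
        return "limited"   # transparency notices to users
    return "minimal"       # voluntary codes of practice

tier = risk_tier("credit_scoring")
```

Even this trivial sketch shows why compliance extends beyond privacy teams: the tier assigned at design time drives downstream obligations for testing, monitoring and incident reporting.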
Transparency
Several laws now mandate transparency. New York City already compels annual bias audits for automated hiring tools, and the CAI Act requires public availability of risk summaries. The EU AI Act mandates publication of detailed system descriptions for high-risk models, while others seek to impose transparency obligations on very large online platforms. In short, it is highly likely that public-facing algorithmic policies, model cards and plain-language notices will become indispensable artefacts in corporate disclosure suites.
Documentation and governance artefacts
Across the key jurisdictions, the following five clusters of internal documentation will require prompt review by businesses (or their legal advisors):
- privacy policies must explain the existence, logic and impacts of ADM;
- algorithmic or AI governance frameworks must embed risk classification, testing protocols and human-in-the-loop safeguards;
- data protection impact assessment templates need to incorporate ADM-specific questions;
- vendor and cloud contracts must impose parallel obligations on providers; and
- policy suites addressing non-discrimination, information security and incident response will need cross-referencing to the new ADM controls to ensure coherence.
Future uses of AI in retail
AI-powered retail is rapidly becoming the norm. As generative and agent-based systems give rise to autonomous shopping agents, Australian retailers, and those trading overseas, must act now to align their internal processes, governance documents, and external policies with new legislative changes.
Taking action now will enable businesses to adopt and utilise emerging technologies that are due to become commonplace in 2026.
Key takeaways for retailers
- Embrace AI as a strategic priority
Retailers must view AI not as a bolt-on technology but as a core component of their business strategy. Those who fail to adapt risk falling behind as competitors leverage AI to enhance customer experiences and operational efficiency.
- Focus on data readiness and integration
Effective AI relies on high-quality, integrated data. Retailers should invest in data infrastructure and ensure that data from all channels, including online, in-store, and third-party platforms, is accessible and actionable (and preferably owned by them).
- Prioritise personalisation and customer experience
AI’s greatest value lies in its ability to deliver personalised experiences at scale. Retailers should use AI to understand customer preferences, anticipate needs, and tailor interactions across all touchpoints.
- Prepare for agentic commerce
The rise of autonomous shopping agents will fundamentally change how customers interact with retailers. Retailers should ensure their platforms are accessible to AI agents and optimise their content and product data for machine consumption.
- Invest in staff training and change management
AI will change the nature of many retail roles. Retailers should invest in training and support to help staff adapt to new tools and processes, ensuring a smooth transition and maximising the benefits of AI.
- Address legal and ethical considerations
When deploying AI, retailers must be mindful of legal and ethical issues, including:
- Data privacy and security: Ensure compliance with data protection regulations (such as the GDPR or the Australian Privacy Act), obtain proper consent for data collection, and implement robust security measures.
- Transparency and accountability: Be clear with customers about how AI is used, especially in decision-making processes that affect them.
- Bias and fairness: Regularly audit AI systems for bias and take steps to ensure fair treatment of all customers.
- Intellectual property: Be aware of IP issues when using generative AI for product design or content creation.
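The “agentic commerce” takeaway above, optimising product data for machine consumption, is commonly approached with structured data such as schema.org Product markup embedded in product pages. The record below is a minimal sketch; the product, SKU and price are invented for illustration.

```python
import json

# Hypothetical product record expressed as schema.org JSON-LD, a format
# that AI shopping agents and search crawlers can parse reliably.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Merino Crew-Neck Jumper",
    "sku": "MK-10421",
    "offers": {
        "@type": "Offer",
        "priceCurrency": "AUD",
        "price": "129.00",
        "availability": "https://schema.org/InStock",
    },
}
jsonld = json.dumps(product, indent=2)
# `jsonld` would typically be embedded in the product page inside a
# <script type="application/ld+json"> tag.
```

Exposing price, stock and identifiers in a stable, machine-readable form is what lets an autonomous shopping agent compare and transact without scraping fragile page layouts.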
Contact our lawyers working in retail and technology industries
AI is reshaping the retail industry, offering new opportunities for growth, efficiency, and customer engagement. By embracing AI strategically, investing in data and staff, and addressing legal and ethical considerations, retailers can position themselves for success in an increasingly automated and personalised marketplace. Those who act now will be best placed to thrive as AI continues to evolve and redefine the future of retail.
If you’d like assistance with implementing AI in your retail business, please contact Mark Metzeling or a member of our MK Technology team.
The information contained in this article is general in nature and cannot be relied on as legal advice nor does it create an engagement. Please contact one of our lawyers listed above for advice about your specific situation.