Open Access Research Article

THE REGULATORY CHALLENGES AND OPPORTUNITIES ASSOCIATED WITH THE USE OF ARTIFICIAL INTELLIGENCE (AI) AND ROBO-ADVISORS IN INVESTMENT MANAGEMENT.

Author(s):
RASHMI A
Journal IJLRA
ISSN 2582-6433
Published 2024/04/17
Access Open Access
Issue 7


 
 
ABSTRACT:
The use of artificial intelligence (AI) in investment management through robo-advisors opens up a world of possibilities and may lead to a more equitable and more efficient financial environment. Robo-advisors excel at customizing investment plans to meet the needs of each client. By taking into account variables such as risk tolerance, financial objectives, and investment horizons, these systems can generate personalized portfolios that correspond to each client's individual circumstances. This degree of customization can greatly increase the likelihood of reaching specific financial goals.
The emergence of robo-advisors and the integration of artificial intelligence (AI) are bringing about a substantial transformation in the investment management environment. These automated platforms use AI algorithms to evaluate enormous volumes of data, build investment portfolios, and offer recommendations tailored to the requirements of individual investors.[1] This technological breakthrough broadens access to investing services and holds out the possibility of improved returns, but it also brings a distinct set of regulatory concerns that require careful consideration by legislators and business leaders.
 
Key words: Robo-advisors, Investor Protection, Algorithmic Bias, Sandbox.
 
INTRODUCTION
Artificial Intelligence (AI) is about to bring about a massive revolution in the financial services sector. Robo-advisors are AI-powered automated systems with the potential to improve returns on investments and democratize access to investment management. But along with this tremendous potential are a number of difficult issues that need to be carefully considered in terms of regulations. This briefing examines the regulatory environment that surrounds artificial intelligence (AI) and robo-advisors in the field of investment management, emphasizing the benefits and problems that may arise.
In traditional investment management, human advisers take clients' risk tolerance and financial objectives into account to create personalized portfolios. Robo-advisors, by contrast, provide a more automated method. Using AI algorithms, they examine vast amounts of data, including market movements, past performance, and customer risk profiles.[2] Based on this analysis, they recommend investing strategies and automatically execute trades within pre-established parameters. This approach has a number of benefits, such as:
1.      Accessibility: Compared to conventional advisers, robo-advisors often have lower minimum investment levels, which makes investment management more affordable for a wider spectrum of investors.
2.      Decreased Cost: Investing through automated systems results in lower fees for investors than working with human advisers.
3.      Objectivity: Artificial intelligence systems are less vulnerable to emotional biases that influence human judgment.
4.      Convenience: Robo-advisors simplify the investing process for customers by providing automated portfolio management and 24/7 access.
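By way of illustration only, the automated approach described above can be sketched in a few lines of Python. The risk scale, risk bands, and allocations below are invented for this example and do not reflect any actual platform's methodology:

```python
def model_portfolio(risk_tolerance: int, years_to_goal: int) -> dict:
    """Map a client's risk profile to a simple equity/bond split.

    risk_tolerance: 1 (conservative) to 10 (aggressive) -- an assumed scale.
    years_to_goal: investment horizon in years.
    """
    if not 1 <= risk_tolerance <= 10:
        raise ValueError("risk_tolerance must be between 1 and 10")
    # Equity share grows with risk tolerance (illustrative rule)...
    equity = 0.10 + 0.07 * risk_tolerance
    # ...and is trimmed for short horizons (another illustrative rule).
    if years_to_goal < 5:
        equity *= 0.6
    equity = min(equity, 0.90)
    return {"equity": round(equity, 2), "bonds": round(1 - equity, 2)}

# An aggressive long-horizon client versus a cautious short-horizon client:
print(model_portfolio(risk_tolerance=8, years_to_goal=20))
print(model_portfolio(risk_tolerance=2, years_to_goal=3))
```

A production robo-advisor would, of course, draw on far richer client data and market inputs; the point here is simply that the mapping from client profile to portfolio is an explicit rule that regulators can in principle inspect and audit.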
 
Regulatory Challenges:
Although AI-powered robo-advisors present promising opportunities, there are important regulatory issues that need to be resolved.[3]
1.      Investor Protection: It is critical to ensure that investors are treated fairly and ethically. Regulations must address potential issues with algorithmic bias, data security lapses, and opacity in AI-driven investing choices.
2.      Fiduciary Duty: Advisors in conventional investment management are obligated to operate in their customers' best interests. This raises the question of who has the fiduciary obligation when using robo-advisors. Which team is in charge—the human supervision team, the AI algorithms, or the platform developer? Legal definitions must be precise.[4]
3.      Risk Management: AI systems are trained on historical data, which may not accurately predict future market conditions. Regulations must take into account the potential systemic risks, such as algorithmic herding or flash crashes triggered by coordinated trading behavior, that come with the broad usage of AI.
4.      Regulatory Uncertainty: Regulators have a dilemma as a result of the quick development of AI technology. Current laws might not be adequate to control the intricacies of investing platforms driven by artificial intelligence. Frameworks must be flexible and sensitive to changes in technology.[5]
 
Opportunities Provided by Regulations: In spite of these challenges, well-designed rules can also offer advantages.
1.      Empowerment and Education of Investors: Regulations can support efforts aimed at educating investors about the advantages and disadvantages of robo-advisors. Giving investors the tools they need to make wise selections is essential.
2.      Encouraging Innovation: A well-balanced regulatory framework may foster responsible innovation and the development of AI-powered investing solutions while protecting investor interests. AI algorithms have the ability to improve market research, risk management, and overall performance. By lowering barriers to investment management, regulations can also advance financial inclusion and enhance market efficiency.
3.      Cybersecurity Requirements: To guarantee data security and prevent unauthorized access to or manipulation of AI algorithms, regulations can set strong cybersecurity requirements for robo-advisor platforms.[6]
 
Technology developers, financial institutions, and regulators must work together to integrate AI into investment management. Through the establishment of a legal framework that strikes a balance between investor protection and innovation, we can fully harness the power of artificial intelligence to revolutionize the investing landscape, therefore improving accessibility, efficiency, and potential returns for all stakeholders.
 
Additional investigation may go more deeply into certain areas of concern, such as possible biases in algorithms and ways to mitigate them. A strong framework that promotes responsible development in this dynamic industry may be developed by analyzing global regulatory trends and best practices.
 
RESEARCH OBJECTIVES
In the field of investment management, the rise of artificial intelligence (AI) and robo-advisors offers a dynamic environment full of potential and difficulties. Strong regulations are necessary to guarantee the ethical and efficient integration of new technologies. This research aims to provide a comprehensive understanding of the complex interplay between AI, robo-advisors, and the regulatory landscape in investment management.[7] By addressing these research objectives, we can contribute to the development of a regulatory framework that fosters responsible innovation and promotes a secure and efficient investment environment for all.
1)      Understanding the Influence of AI-backed Robo-advisors on Investor Behavior and Outcomes: Examine the effects that robo-advisor platforms and AI-driven investment decision-making have on investor behavior, risk tolerance, and overall investment results.
2)      Market Dynamics and Efficiency: Examine how artificial intelligence (AI) could affect market dynamics, including liquidity, efficiency, and the possibility of systemic risks.
3)      Balancing Innovation and Risk Management: Identify effective regulatory measures to protect investors from the possible negative effects of artificial intelligence (AI) and robo-advisors, including algorithmic bias, data security breaches, and a lack of transparency.
4)      Risk Assessment and Mitigation Techniques: To maintain market integrity and financial stability, develop techniques to evaluate and reduce the risks related to AI in investment management.
 
RESEARCH QUESTIONS
        i.            How can laws mitigate algorithmic bias and manipulation risk while ensuring fairness and transparency in AI-driven investment decisions?
      ii.            How can regulations clearly define who bears fiduciary duty for investment outcomes with robo-advisors?
    iii.            How can regulations empower investors to make informed decisions about using these platforms?
    iv.            What role can regulatory sandboxes play in facilitating the development and testing of innovative robo-advisor technologies in a controlled environment?
 
RESEARCH METHODOLOGY
For this research work, the doctrinal method of research has been followed. Primary sources such as Acts, statutes and bare acts have been referred to, and secondary sources such as websites and legal databases like Manupatra have also been utilised for the purpose of this research.
 
ANALYSIS
How can laws mitigate algorithmic bias and manipulation risk while ensuring fairness and transparency in AI-driven investment decisions?
The use of artificial intelligence (AI) in investment management is opening intriguing new opportunities for enhancing returns and democratizing access. These benefits are accompanied by serious difficulties, however, including the possibility of algorithmic bias, manipulation, and a lack of transparency.[8] Legal frameworks need to evolve in order to handle these issues and guarantee fair and responsible AI-driven investment decisions.
One critical area for legal intervention is mitigating algorithmic bias.[9] AI algorithms are trained on historical data, which can inadvertently perpetuate existing biases present in the data itself. This can lead to discriminatory outcomes, where investment strategies favor certain demographics or asset classes over others. Legal frameworks can address this by:
 
Ø  Legislation requiring Algorithmic Explainability: Investors may be entitled to get concise justifications from robo-advisor platforms regarding the reasoning and logic underlying AI-generated recommendations. This enables investors to recognize possible biases and comprehend the elements driving their investing decisions[10].
Ø  Encouraging Data Diversity: Policies have the power to motivate the usage of a variety of datasets for AI algorithm training. This makes it less likely that past biases will be reinforced and guarantees that the models take a wider variety of investment opportunities into account.
Ø  Human Intervention and Oversight: Laws may require a human intervention role in robo-advisor platforms. Human advisers may review AI suggestions and step in when they suspect bias or unfair outcomes.
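To make the bias concern concrete, a simple disparity audit, sketched here in Python with invented data, compares how often a model recommends a product across demographic groups. The threshold at which a gap warrants review is a policy choice that this sketch cannot settle:

```python
from collections import defaultdict

def recommendation_rates(records):
    """Rate of positive recommendations per group.

    records: iterable of (group_label, recommended: bool) pairs, e.g.
    replayed outputs of a robo-advisor's model on a held-out test set.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, recommended in records:
        totals[group] += 1
        positives[group] += int(recommended)
    return {g: positives[g] / totals[g] for g in totals}

def disparity(rates):
    """Largest gap between any two groups' rates -- a crude bias flag."""
    values = list(rates.values())
    return max(values) - min(values)

# Invented audit sample: group A is recommended the product far more often.
sample = ([("A", True)] * 8 + [("A", False)] * 2
          + [("B", True)] * 3 + [("B", False)] * 7)
rates = recommendation_rates(sample)
print(rates, "gap:", disparity(rates))
```

Regulations mandating algorithmic audits would, in effect, require platforms to run checks of this general character (with far more statistical rigour) and to explain or remediate any material disparity.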
 
Another key concern is manipulation risk. Malicious actors could potentially exploit vulnerabilities in AI algorithms to manipulate investment decisions for personal gain. To mitigate this, legal frameworks can:
Ø  Cybersecurity Requirements: Robo-advisor platforms should be subject to strong cybersecurity regulations that guarantee data integrity and prohibit unauthorized access or manipulation of AI algorithms.
Ø  Auditing and Stress Testing: To find any flaws and vulnerabilities that might be used for manipulation, regular audits and stress testing of AI models can be required.
Ø  Clear Legal Liability: Establishing explicit legal liability for manipulating AI algorithms in investment choices both deters malevolent actors and ensures accountability.
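The auditing and stress-testing requirement can be illustrated in deliberately simplified form. Real supervisory stress tests are far richer; this Python sketch merely replays invented shock scenarios against a model portfolio and flags breaches of an assumed loss limit:

```python
def stress_test(weights, scenarios, loss_limit=-0.25):
    """Apply each named shock scenario to a portfolio and flag breaches.

    weights: asset -> portfolio weight (assumed to sum to 1).
    scenarios: name -> {asset: shocked return}; the shocks are hypothetical.
    Returns {scenario_name: (portfolio_return, breached_limit)}.
    """
    results = {}
    for name, shocks in scenarios.items():
        ret = sum(weights[a] * shocks.get(a, 0.0) for a in weights)
        results[name] = (round(ret, 4), ret < loss_limit)
    return results

# Illustrative numbers only -- not calibrated to any real market episode.
portfolio = {"equity": 0.7, "bonds": 0.3}
crises = {
    "equity_crash": {"equity": -0.45, "bonds": 0.05},
    "rate_shock":   {"equity": -0.10, "bonds": -0.15},
}
print(stress_test(portfolio, crises))
```

A mandated audit regime would run many such scenarios, including adversarial ones designed to probe for manipulation, and require remediation wherever the platform breaches its stated risk limits.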
 
Building trust and maintaining investor confidence in AI-driven investments require transparency. Transparency can be facilitated by legal frameworks via[11]:
Ø  Clear Risk Disclosures: Regulations may require robo-advisors to provide precise and clear statements on the risks and limitations of investing strategies guided by artificial intelligence. Investors ought to be aware of the part AI plays, any potential biases, and the degree of human control involved.
Ø  Standardized Reporting Metrics: Robo-advisor systems can be equipped with standardized reporting metrics. This makes it possible for investors to evaluate performance and decide which platform best fits their requirements and risk tolerance.
Ø  Initiatives for Investor Education[12]: Initiatives aimed at educating investors about the advantages and disadvantages of investing using artificial intelligence (AI) might be supported by legal frameworks. Investors have to be prepared to use these platforms with knowledge of how artificial intelligence operates.
 
Therefore, we can establish a regulatory framework that encourages responsible innovation in AI-driven investment management by putting these legislative proposals into practice in tandem. This would protect investor interests and encourage just and moral decision-making while reducing the dangers of algorithmic bias, manipulation, and lack of transparency. As a result, everyone will be able to benefit from AI's promise as a formidable tool for increasing access to investment options and maybe improving investment outcomes.[13]
 
How can regulations clearly define who bears fiduciary duty for investment outcomes with robo-advisors?
Legal frameworks face a new issue with the rise of robo-advisors, AI-powered services that provide automated financial management.[14] Even though robo-advisors hold a great deal of promise to democratize access to investing, a crucial question remains: who bears the fiduciary obligation for the results of investments made through them? Fiduciary responsibility is well-established in traditional investment management frameworks with human advisers. But with robo-advisors, it becomes more difficult to distinguish between human oversight, algorithms, and platform developers, so a precise legal definition of accountability is required.
 
The Subject of Fiduciary Duty:
Under the doctrine of fiduciary duty, investment managers have always been required to act in their clients' best interests, putting their financial security first. This obligation consists of:

DUTY OF CARE: Advisors have a duty to manage client investments with care and attention, while complying with applicable investing standards and laws.
DUTY OF LOYALTY: Advisors have a duty of loyalty to put their clients' interests ahead of their own or those of other parties, and they must steer clear of conflicts of interest.
DUTY OF DISCLOSURE: Advisors have an obligation to fully and fairly disclose to their clients all relevant information pertaining to investments and the risks involved.
The Robo-advisor Dilemma
The advisor-client relationship has changed with the advent of robo-advisors. Below is a summary of the principal actors involved:
Robo-advisor Platform Developer: The person or group responsible for developing and managing the AI algorithms and the platform's general infrastructure.
AI Algorithms: Automated models that make investment decisions based on data analysis and generate suggestions.
Human Oversight Team: The team in charge of monitoring the platform, guaranteeing its operation, and stepping in when necessary.
Investor: The person who uses robo-advisor platforms to make at least some of their investing decisions.[15]
 
Determining which of these entities, or which mix of them, should bear fiduciary responsibility for investment results is the difficult part. Here are a few possible strategies:

Strict Liability of the Platform Developer: Under this approach, the platform developer bears full responsibility for fulfilling fiduciary duties. They would be responsible for any investment losses brought on by platform faults, biased data, or algorithmic mistakes. This approach encourages developers to give careful consideration to data quality, robust algorithm design, and responsible platform operation. On the other hand, it may discourage innovation, as developers might become unduly risk-averse.[16]
 
Shared Fiduciary Duty approach: This approach divides accountability between a team of human overseers and the platform developer. The developer would be in charge of making sure the platform complies with legal requirements and operates as intended. The staff responsible for human oversight would keep an eye on the platform's operations, take action if there were any possible problems, and perhaps even provide clients extra investing advice. While this strategy encourages shared accountability, it necessitates a clear division of roles inside the platform.

Algorithmic Fiduciary Obligation: According to this emerging idea, artificial intelligence algorithms may bear a fiduciary obligation of their own. This would involve instructing the algorithms to give the client's financial security top priority when making decisions. The viability of this approach from a technological and legal standpoint is still up for debate.
 
Aspects to Take into Account for Efficient Regulation:
Risk-Based Approach: The intricacy of the robo-advisor platform and the degree of risk involved in its investing methods have to be taken into account by regulations. Regulations governing simpler platforms with pre-defined portfolios can be less onerous than those governing more intricate, customized investing strategies.[17]

Disclosure and Transparency: It's critical to provide clear and straightforward information about the use of AI in investing choices as well as the type of human oversight that occurs on the platform. Investors need to know who is responsible for fulfilling fiduciary duties and what options they have when their investments lose money.
 
Regulatory Flexibility: In order to keep up with the quick developments in AI technology, the legal system must be flexible. Regulations must be reviewed and updated often to guarantee their continued applicability and effectiveness.
 
Therefore, determining fiduciary obligation in the context of robo-advisors is a difficult undertaking that necessitates giving considerable thought to all of the relevant parties.[18] We can create a future where AI-powered investing tools positively impact financial inclusion and responsible wealth creation for all by investigating various models and enacting legislation that put investor protection, risk management, and transparency first. In this quickly changing investment environment, where innovation and investor protection must coexist, the legal framework must constantly change as technology advances.
 
 
How can regulations empower investors to make informed decisions about using these platforms?
Artificial intelligence (AI)-driven automated investing platforms, or robo-advisors, are transforming the way people access investment opportunities. But this increased accessibility also presents a significant challenge: enabling investors to use these platforms with knowledge and discretion. Regulations can be a crucial tool in levelling the playing field by ensuring that investors are aware of the benefits and hazards of AI-driven investment management.
Investment choices were traditionally guided by human advisors who offered tailored suggestions and justifications. Because robo-advisors rely on algorithms, by contrast, there is an information imbalance between investors and the platform. This raises questions regarding the degree of human control involved, possible biases in the algorithms, and investors' comprehension of the investing techniques. To rectify this imbalance, regulations can concentrate on encouraging investor education and transparency within robo-advisor platforms.[19]
1.      Investor Education Initiatives:
Ø  Standardized Disclosures: Regulations can mandate clear and standardized disclosures from robo-advisor platforms. These disclosures should explain the platform's investment philosophy, the role of AI in decision-making, potential biases in the algorithms, and the level of human oversight involved.[20]
Ø  Investor Education Campaigns: Regulatory bodies can promote financial literacy initiatives targeted at educating investors about AI-powered investment management. This could involve campaigns explaining how robo-advisors work, the risks and benefits associated with them, and factors to consider when selecting a platform.
Ø  Interactive educational Tools: Within robo-advisor systems, regulations may promote the creation of interactive instructional resources. With the use of these technologies, investors may be able to see possible outcomes depending on different risk profiles by using individualized simulations of alternative investing methods.[21]
 
2.      Providing Transparency to Empower:
Ø  Performance Reporting: Standardized reporting metrics for robo-advisor platforms might be established by regulations. This makes it simple for investors to evaluate the efficacy of various AI algorithms, compare platform performance, and decide which platform best suits their financial objectives.
Ø  Algorithmic Explainability: Although it may be difficult to fully disclose all of the workings of intricate AI algorithms, laws can support attempts in this direction. This entails giving investors a fundamental comprehension of the elements affecting the platform's suggestions, even in cases when the algorithm's finer points remain unknown.[22]
Ø  Independent Ratings and Reviews: The creation of independent robo-advisor review systems may be aided by regulations. These platforms, which are constructed using reliable approaches, can give investors objective evaluations of various robo-advisor products, enabling them to make well-informed decisions.
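What standardized reporting metrics might look like in practice can be sketched as follows. The metric set here (compound annual growth rate, volatility, worst year) is an assumption chosen for illustration, not an existing regulatory standard; the point is that applying one uniform calculation to every platform's return series makes results directly comparable:

```python
import statistics

def standard_metrics(annual_returns):
    """Compute a small, uniform metric set from a series of annual returns.

    annual_returns: list of yearly returns as decimals, e.g. 0.07 for 7%.
    """
    n = len(annual_returns)
    growth = 1.0
    for r in annual_returns:
        growth *= 1 + r
    cagr = growth ** (1 / n) - 1          # compound annual growth rate
    vol = statistics.stdev(annual_returns) if n > 1 else 0.0
    return {"cagr": round(cagr, 4),
            "volatility": round(vol, 4),
            "worst_year": min(annual_returns)}

# Two hypothetical platforms reported on the same basis:
print(standard_metrics([0.08, 0.12, -0.05, 0.10]))
print(standard_metrics([0.06, 0.07, 0.05, 0.06]))
```

Because both hypothetical platforms are scored by the identical formula, an investor can weigh the first platform's higher growth against its larger swings and worse drawdown, which is precisely the comparison standardized reporting is meant to enable.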
 
3.      Adapting Rules to Investor Requirements:
Ø  Risk-Based Approach: When deciding how much disclosure and openness are required, regulations should take the investor's degree of financial literacy and risk tolerance into account. A less thorough strategy can be enough for individuals with greater risk tolerance and investing expertise.[23]
Ø  Know Your Investor (KYI) Principles: Robo-advisor platforms may be encouraged by regulations to have strong KYI principles. This guarantees that the platform gathers sufficient data on the investor's financial objectives, level of risk tolerance, and prior investing expertise. The platform may customize its suggestions and make sure they fit the investor's particular situation based on this information.
 
4.      Empowerment via Control and Choice:
Ø  Investing Flexibility: Policies may incentivize robo-advisor platforms to provide a range of automated investing solutions. This gives investors the option to select platforms that offer AI-driven recommendations along with human adviser oversight, or entirely automated investment management.
Ø  Simple Opt-out Procedures: Investors who want to discontinue using their robo-advisor platform or change their investing approach at any time should have access to simple and uncomplicated opt-out procedures that are guaranteed by regulations. As a result, investors feel more in control and are able to modify their investing strategy as necessary.[24]
 
Thus, by combining these legislative actions, we may provide a setting where investors have access to the information and resources they need to decide whether or not to use robo-advisors. Increased openness, better investor education, and customized laws will enable people to successfully navigate the AI-driven financial landscape and maybe realize the full potential of this cutting-edge technology. Ultimately, building confidence and guaranteeing the appropriate expansion of robo-advisors within the larger financial environment depend heavily on an informed and empowered investor base.
 
What role can regulatory sandboxes play in facilitating the development and testing of innovative robo-advisor technologies in a controlled environment?
The advent of artificial intelligence-driven systems known as robo-advisors, which provide automated investment management, poses a distinct challenge for regulatory bodies. While these platforms have enormous potential to democratize investment access and possibly increase returns, their novelty demands a delicate balance between protecting investors and promoting innovation. Regulatory sandboxes offer a viable answer by providing a controlled environment for testing and developing cutting-edge robo-advisor technology before it is widely adopted by the market.[25]
1.      The Benefits of a Regulatory Sandbox:
A regulatory sandbox gives FinTech companies a dedicated space within a regulatory framework where they may test new financial products and services under less stringent regulations. This controlled environment offers several benefits for the advancement of robo-advisor technology:
                    i.            Shorter Time to Market: Conventional regulatory procedures may be drawn out and difficult. Sandboxes let robo-advisor platforms test and improve more quickly, which enables them to get to market sooner and take advantage of new trends in investing.
                  ii.            Protecting Customers: Sandboxes' regulated environment makes sure that customer safety always comes first. Before allowing for widespread use, regulators can keep a careful eye on and evaluate any possible hazards related to investing choices powered by artificial intelligence.
                iii.            Testing New Technologies: Robo-advisor sandboxes offer a secure environment for testing cutting-edge algorithms and features. This enables developers to test out various methods for portfolio development, client engagement tactics, and risk evaluation. [26]
                iv.            Encouraging Innovation: Sandboxes promote cooperation between FinTech companies and regulators. The system may be improved and future compliance with changing standards can be ensured by using regulatory input obtained during testing.
                  v.            Finding Regulatory Gaps: Regulators can see places where current laws might not be the best fit to control investments driven by artificial intelligence by keeping an eye on how the sandbox operates. Future legislative frameworks that provide a balance between consumer protection and innovation can be informed by this.
 
2.      Developing Effective Regulatory Sandboxes:
Several crucial components are necessary for regulatory sandboxes to be truly effective in promoting the development of robo-advisors:
        i.            Eligibility and Scope Should Be Clearly Defined: The sandbox should have a scope that specifies the kinds of robo-advisor technologies that are acceptable for testing. This guarantees that innovation complies with legal requirements. [27]
      ii.            Simplified Application Process: To minimize bureaucratic obstacles for potential robo-advisor developers, the sandbox application process should be straightforward and fast.
    iii.            Data Security and Privacy policies: To protect customer data and foster confidence in the testing process, strong data security and privacy policies need to be set up inside the sandbox. [28]
    iv.            Unambiguous Metrics and Reporting Needs: It is important to identify the metrics used to assess the effectiveness and risk profile of robo-advisor technology. Furthermore, unambiguous reporting guidelines provide openness and enable authorities to evaluate the effectiveness of the testing procedure.[29]
      v.            Exit Strategy: Following successful testing in the sandbox setting, a robo-advisor platform must have a well-defined exit strategy that outlines the next stages for full market launch.
Consequently, regulatory sandboxes have a great deal of potential to promote creativity and hasten the creation of ethical robo-advisor technology.[30] We can set the stage for a future in which AI-powered investing tools significantly contribute to a more equitable, effective, and perhaps profitable investment environment for all by establishing a secure place for experimentation, testing, and collaboration between regulators and FinTech companies.
 
FINDINGS:
Several important conclusions are drawn from the examination of the regulatory opportunities and challenges related to AI and robo-advisors in investment management.
AI-powered robo-advisors offer significant potential advantages: lower costs compared to conventional advisers, improved accessibility to investment management, and the possibility of better returns through data-driven analysis.

Regulatory obstacles need to be carefully considered: First and foremost, legislation must safeguard investors from possible bias in algorithms, data security threats, opaque decision-making, and unclear definitions of fiduciary obligation in this new technological environment.
 
There is regulatory ambiguity: as the laws that are in place might not be sufficient to handle the intricacies of investing platforms driven by artificial intelligence. Frameworks must be flexible and sensitive to the quick changes in technology.

Several opportunities for beneficial regulatory involvement: Regulations can increase market efficiency through better analysis, stimulate responsible innovation, educate investors, and advance financial inclusion by increasing accessibility to investment management.
 
CONCLUSION
AI integration in investment management brings intriguing new possibilities as well as obstacles. A collaborative approach between financial institutions, regulators, and technology innovators should be fostered to build a regulatory framework that strikes a balance between investor protection and innovation. By doing this, artificial intelligence will be able to reach its full potential and transform the investing industry, improving accessibility, efficiency, and perhaps even profitability for all parties involved.
 
AI integration in investment management offers a strong chance to increase efficiency, democratize access, and maybe increase returns. Navigating the regulatory environment is still a difficult task, though. A cooperative strategy combining regulators, financial institutions, and technology developers is necessary to fully realize the promise of AI-powered robo-advisors. Regulations that address issues with algorithmic bias, data security, and transparency should put investor protection first. It is imperative that fiduciary obligation be defined legally with regard to AI algorithms and human monitoring teams. Strong cybersecurity requirements must also be established, and investor education programs must be supported.
 
Regulatory sandboxes are an appropriate choice, offering a safe haven for experimenting with cutting-edge robo-advisor technology while protecting users. By creating an atmosphere that fosters responsible innovation with crucial protections, we can ensure that AI serves as a potent instrument for improving the investing landscape, promoting financial inclusion, and empowering investors of all levels.

AI in investment management has a bright future ahead of it. If we can successfully navigate the regulatory obstacles, we can usher in a new era of financial empowerment and expansion. It will take time to adjust and improve the regulatory structure as technology advances. But if we put innovation and investor safety first, we can create the conditions for a more successful and inclusive financial future for all.
 
SUGGESTIONS:
1.      Provide Explicit Regulatory Frameworks: Develop transparent, flexible policies that address algorithmic bias, data security, and fiduciary obligation on AI-driven investing platforms.
2.      Encourage Sandboxes for Regulation: Establish and employ regulatory sandboxes to enable the controlled testing and development of novel robo-advisor technology.
3.      Prioritize Investor Education: Take steps to educate investors on the advantages and disadvantages of employing robo-advisors so they may make well-informed decisions.

4.      Promote Collaboration: To guarantee the appropriate development and use of AI in investment management, promote cooperation amongst financial institutions, regulators, and technology developers.
5.      Encourage Responsible AI Development: Give justice, accountability, and transparency a priority as you encourage the creation and implementation of moral frameworks and guidelines for AI algorithms that are used to make investment choices.
Monitor and Adjust: Keep a close eye on how artificial intelligence is developing in the field of investment management, and adjust rules as necessary to take advantage of new possibilities and difficulties.


[1] Belanche, D., Casaló, L.V. and Flavián, C., 2019. Artificial Intelligence in FinTech: understanding robo-advisors adoption among customers. Industrial Management & Data Systems, 119(7), pp.1411-1430.
[2] Phoon, K.F. and Koh, C.C.F., 2018. Robo-advisors and wealth management. Journal of Alternative Investments, 20(3), p.79.
[3] Sabharwal, C.L., 2018. The rise of machine learning and robo-advisors in banking. IDRBT Journal of Banking Technology, 28.
[4] Lee, K.Y., Kwon, H.Y. and Lim, J.I., 2018. Legal consideration on the use of artificial intelligence technology and self-regulation in financial sector: focused on robo-advisors. In Information Security Applications: 18th International Conference, WISA 2017, Jeju Island, Korea, August 24-26, 2017, Revised Selected Papers 18 (pp. 323-335). Springer International Publishing.
[5] Nain, I. and Rajan, S., 2023. Algorithms for better decision-making: a qualitative study exploring the landscape of robo-advisors in India. Managerial Finance, 49(11), pp.1750-1761.
[6] Mrkývka, P. and Šiková, Z., 2023. Robo-advisory as a driving force of the financial market in the light of digitalization? WSB Journal of Business and Finance, 57(1), pp.66-77.
[7] Ashrafi, D.M., 2023. Managing Consumers' Adoption of Artificial Intelligence-Based Financial Robo-Advisory Services: A Moderated Mediation Model. Journal of Indonesian Economy and Business, 38(3), pp.270-301.
[8] Sifat, I., 2023. Artificial Intelligence (AI) and Future Retail Investment.
[9] Mensah, G.B., 2023. Artificial Intelligence and Ethics: A Comprehensive Review of Bias Mitigation, Transparency, and Accountability in AI Systems.
[10] Kumar, D. and Suthar, N., 2024. Ethical and legal challenges of AI in marketing: an exploration of solutions. Journal of Information, Communication and Ethics in Society.
[11] Rane, N., Choudhary, S. and Rane, J., 2024. Artificial Intelligence-Driven Corporate Finance: Enhancing Efficiency and Decision-Making Through Machine Learning, Natural Language Processing, and Robotic Process Automation in Corporate Governance and Sustainability (February 8, 2024).
[12] Kanaparthi, V., 2024. AI-based Personalization and Trust in Digital Finance. arXiv preprint arXiv:2401.15700.
[13] Rane, N., Choudhary, S. and Rane, J., 2023. Blockchain and Artificial Intelligence (AI) integration for revolutionizing security and transparency in finance. Available at SSRN 4644253.
[14] Paterson, J.M., Miller, T. and Lyons, H., 2023. Demystifying Consumer-Facing Fintech: Accountability for Automated Advice Tools.
[15] Rühr, A., 2020. Robo-advisor configuration: an investigation of user preferences and the performance-control dilemma.
[16] Darskuviene, V. and Lisauskiene, N., 2021. Linking the robo-advisors phenomenon and behavioural biases in investment management: An interdisciplinary literature review and research agenda. Organizations and Markets in Emerging Economies, 12(2), pp.459-477.
[17] Capponi, A., Olafsson, S. and Zariphopoulou, T., 2022. Personalized robo-advising: Enhancing investment through client interaction. Management Science, 68(4), pp.2485-2512.
[18] David, D. and Sade, O., 2019. Robo-advisor adoption, willingness to pay, and trust: an experimental investigation. Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm
[19] Gardner, T.A., Benzie, M., Börner, J., Dawkins, E., Fick, S., Garrett, R., Godar, J., Grimard, A., Lake, S., Larsen, R.K. and Mardas, N., 2019. Transparency and sustainability in global commodity supply chains. World Development, 121, pp.163-177.
[20] Tan, G.K.S., 2020. Robo-advisors and the financialization of lay investors. Geoforum, 117, pp.46-60.
[21] Yuspin, W. and Iksan, M., 2024. Risk aversion in the future of financial advisory: The legal protections for Robo-Advisor users in mutual fund investments. J. Account. Fin. Audit. Stud., 10(1), pp.28-36.
[22] Jung, D., Dorner, V., Glaser, F. and Morana, S., 2018. Robo-advisory: digitalization and automation of financial advisory. Business & Information Systems Engineering, 60, pp.81-86.
[23] Sironi, P., 2016. FinTech innovation: from robo-advisors to goal based investing and gamification. John Wiley & Sons.
[24] Baker, T. and Dellaert, B., 2019. The Regulatory Strategy for Robo-Advice. The disruptive impact of FinTech on retirement systems, p.149.
[25] Miglionico, A., 2023. Regulating Innovation through Digital Platforms: The Sandbox Tool. European Company and Financial Law Review, 19(5), pp.828-853.
[26] Daldaban, I.I., 2019. RegTech and SupTech for Robo-Advisers: Alternative Regulatory Methods for Enhancing Compliance. Asper Rev. Int'l Bus. & Trade L., 19, p.59.
[27] Karadogan, B.B., 2019. Regulating Financial Technology – Opportunities and Risks (Doctoral dissertation, University of Essex).
[28] Allen, H.J., 2019. Regulatory sandboxes. Geo. Wash. L. Rev., 87, p.579.
[29] Baker, T. and Dellaert, B., 2017. Regulating robo advice across the financial services industry. Iowa L. Rev., 103, p.713.
[30] Chia, Y., 2021. Robo financial advice: the new frontier. In Financial Advice and Investor Protection (pp. 19-37). Edward Elgar Publishing.
