Digits Presented Lessons Learned from Deploying Large Language Models

Tucked away in the scenic splendor of Asheville, North Carolina, the AI in Production conference was held on July 19, 2024, bringing together some of the brightest minds in machine learning and artificial intelligence. The intimate gathering of 85 participants included ML experts and AI enthusiasts from renowned tech giants like Tesla, Intuit, Ramp, GitHub, and Digits. The focus of this event was clear: to delve deep into the real-world, production-level applications of machine learning, with a keen interest in the burgeoning field of Large Language Models (LLMs).

Unlike many conferences plagued by buzzwords and marketing fluff, AI in Production distinguished itself by being refreshingly genuine. The sessions were dedicated to practical machine learning solutions, steering clear of hype-driven narratives. It was a platform where every presenter showcased solutions to their problems, fostering an environment of learning, problem-solving, and genuine curiosity.

Unearthing Practical Solutions

The conference's essence lay in its authenticity. Each presentation was rooted in real-world experiences, offering attendees valuable insights into the successes and challenges of deploying large language models in production environments. Here are three presentations that stood out to Digits’ ML team.

JAX vs. PyTorch: A Comparative Perspective by Tesla

Sujay S Kumar, an engineer at Tesla, delivered one of the standout talks: a detailed comparison of JAX and PyTorch, two leading frameworks in the machine learning landscape. The session was rich with technical insights, shedding light on the nuances of each framework. Kumar weighed JAX's strengths and weaknesses in flexibility and performance optimization against PyTorch's robust ecosystem and user-friendly interface. This comparative analysis equipped the audience to make informed decisions based on their specific project needs and infrastructure considerations.
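As a minimal illustration of the kind of trade-off discussed (this is our own sketch, not code from the talk), JAX expresses differentiation and compilation as composable function transformations, in contrast to PyTorch's object-oriented autograd:

```python
# Illustrative only: gradient of a mean-squared-error loss in JAX,
# using its functional transformations jax.grad and jax.jit.
import jax
import jax.numpy as jnp

def mse_loss(w, x, y):
    pred = x @ w                     # linear model prediction
    return jnp.mean((pred - y) ** 2)

# jax.grad returns a new function computing d(loss)/dw;
# jax.jit compiles it with XLA for performance.
grad_fn = jax.jit(jax.grad(mse_loss))

w = jnp.ones(3)
x = jnp.array([[1.0, 2.0, 3.0]])
y = jnp.array([10.0])
print(grad_fn(w, x, y))              # gradient of the loss w.r.t. w
```

The same computation in PyTorch would mutate `.grad` attributes on tensors after a `backward()` call; which style is preferable depends largely on the team and the project, which was precisely the talk's point.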

Ethical Implementation: Insights from GitHub

Christina Entcheva, a Senior Engineering Director at GitHub, led a thought-provoking session on the ethical dimensions of deploying large language models. As these models become increasingly pervasive, the ethical implications surrounding data privacy, algorithmic bias, and societal impact are paramount. Christina conveyed a sense of urgency about model bias and about ensuring AI systems' fairness, transparency, and accountability. The talk was a timely reminder of the importance of ethical considerations, resonating deeply with the audience and sparking meaningful discussions on integrating ethical practices into everyday AI operations.

Tackling Data Curation: Best Practices from Dendra Systems’ Senior Data Scientist

Another illuminating talk was given by Richard Decal, a Senior Data Scientist at Dendra Systems, who took a deep dive into the challenges of curating production data sets once a machine learning model is deployed. The talk focused on data drift, the phenomenon in which the statistical properties of the data a model sees in production change over time, rendering the model less accurate. Decal shared valuable strategies for monitoring and mitigating data drift, emphasizing continuous evaluation and adjustment of data pipelines to maintain model performance. This session was particularly beneficial for practitioners seeking to improve the robustness and reliability of their deployed models.
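The talk's specific monitoring code wasn't published, but one common lightweight drift check, the Population Stability Index (PSI) over a feature's reference versus live distribution, can be sketched in a few lines (an illustrative sketch of the general technique, not code from the talk):

```python
import math

def psi(reference, live, bins=10):
    """Population Stability Index: a drift score comparing the binned
    distribution of a live sample against a reference sample.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    lo = min(min(reference), min(live))
    hi = max(max(reference), max(live))
    width = (hi - lo) / bins or 1.0

    def bin_fraction(sample, i):
        left, right = lo + i * width, lo + (i + 1) * width
        inside = sum(left <= x < right or (i == bins - 1 and x == hi)
                     for x in sample)
        return max(inside / len(sample), 1e-6)  # floor avoids log(0)

    return sum(
        (bin_fraction(live, i) - bin_fraction(reference, i))
        * math.log(bin_fraction(live, i) / bin_fraction(reference, i))
        for i in range(bins)
    )

training_ages = [20 + i % 40 for i in range(400)]    # reference distribution
production_ages = [35 + i % 40 for i in range(400)]  # shifted live distribution
print(round(psi(training_ages, training_ages), 4))   # ~0.0: no drift
print(round(psi(training_ages, production_ages), 4)) # well above 0.25: drift
```

Running a check like this on each feature of incoming production batches, and alerting when the score crosses a threshold, is one way to operationalize the continuous evaluation the talk advocated.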

Beyond the Sessions: Meaningful Interactions

One of the conference's highlights was the vibrant conversations during and after the sessions. The atmosphere was charged with intellectual curiosity as participants explored the latest trends in machine learning. The intimate setting facilitated meaningful networking opportunities, allowing attendees to connect more personally.

The conference's practical focus enabled attendees to go beyond theoretical knowledge and engage with tangible solutions. Every participant left the conference with actionable takeaways, ready to implement the insights gained into their projects.

A Glimpse into the Future: Anticipation for Next Year

The resounding success of the AI in Production conference has left participants eagerly anticipating the next edition. Witnessing the cutting-edge advancements in large language models and their practical applications was a rare opportunity. The conference's genuine, no-nonsense approach ensured that every moment spent was valuable, fostering a culture of learning, innovation, and ethical AI deployment.

A Heartfelt Thank You

A significant part of the conference’s success can be attributed to the meticulous planning and execution by Julio Baros and the dedicated team of volunteers. Their efforts in organizing and facilitating the event were commendable, ensuring a seamless experience for all attendees. The invitation extended to Digits’ ML team was greatly appreciated, and the team was honored to be part of such a prestigious gathering.

Conclusion

The AI in Production conference in Asheville, NC, on July 19, 2024, was more than just an event; it was a confluence of brilliant minds, innovative solutions, and forward-thinking discussions. Focusing on practical, production-level applications of large language models provided attendees with a treasure trove of knowledge and insights. The captivating presentations, engaging conversations, and the stunning backdrop of Asheville made it an unforgettable experience.

We at Digits are already looking forward to returning next year, eager to continue the journey of learning and discovery in the ever-evolving landscape of machine learning and artificial intelligence.

In the meantime, let’s carry forward the lessons learned, implement the best practices shared, and strive for excellence in deploying large language models. A big thank you once again to Julio Baros and all the conference volunteers for making this event a remarkable success. Here’s to pushing the boundaries of AI and machine learning, one practical solution at a time.

Interested in learning more about Digits' presentation at the AI in Production conference? Stay tuned for our upcoming blog post, where we delve deeper into our lessons learned from deploying large language models in production environments.

Digits at Google I/O'24: A Fusion of Innovation and Collaboration

Google I/O, Google's annual developer conference held in Mountain View, California, has always been a beacon of new technology and innovation, and 2024 was no exception. Like last year, Digits had the privilege of being invited to participate in this global gathering of ML/AI experts. Our team of engineers was thrilled and honored to be part of such a dynamic and forward-thinking event.

Engaging with the Developer Advisory Board

One of the key highlights for us was participating in Google’s Developer Advisory Board meeting. This not only provided us with a platform to share our insights but also allowed us to exchange ideas with Google's Developer X group and learn about upcoming products.

A Closer Look at Google’s Innovations

From Digits' perspective, several announcements and tools stood out, each promising to significantly impact our journey with machine learning and artificial intelligence. Here’s a rundown of the highlights:

Gemma 2: A Leap Forward for Open Source LLMs

Google unveiled Gemma 2, a new model designed to enhance the capabilities of open-source large language models (LLMs). What makes Gemma 2 truly remarkable is its optimization for specific instance types, which will help reduce costs and improve hardware utilization. This is a significant advancement, as it enables more efficient and cost-effective deployment of ML models, a crucial factor for any tech-driven company.

Responsible Generative AI Toolkit

Another noteworthy introduction was Google's Responsible Generative AI Toolkit. This comprehensive toolkit provides resources to apply best practices for responsible use of open models like the Gemma series. It includes:

  • Guidance on Setting Safety Policies: Frameworks and guidelines for establishing robust safety policies when deploying AI models.
  • Safety Tuning and Classifiers: Tools for fine-tuning safety mechanisms to ensure that AI behaves as intended.
  • Model Evaluation: Metrics and methodologies for thorough evaluation of model safety.
  • Learning Interpretability Tool (LIT): This tool enables developers to investigate the behavior of models like Gemma and address potential issues. It offers a deeper understanding of how models make decisions, which is crucial for transparency and trustworthiness.
  • Methodology for Building Robust Safety Classifiers: Techniques to develop effective safety classifiers even with minimal examples, ensuring that AI systems can operate reliably in diverse scenarios.

LLM Comparator: A Visualization Tool for Model Comparison

The LLM Comparator is another brilliant tool that grabbed our attention. It is an interactive visualization tool designed to analyze LLM evaluation results side by side, facilitating qualitative analysis of how responses from two models differ at both the example and slice level. For engineers and developers, this means more insightful comparisons and a stronger ability to refine and improve their models.
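The core idea behind such side-by-side analysis, aggregating pairwise judgments per example and rolling them up per data slice, can be illustrated with a toy sketch (purely illustrative; this is not the LLM Comparator's actual code or API):

```python
from collections import defaultdict

def slice_win_rates(judgments):
    """Aggregate per-example pairwise judgments into per-slice win rates
    for model A over model B. Each judgment carries a 'slice' tag and the
    two models' scores for their responses to the same prompt."""
    tally = defaultdict(lambda: [0, 0])  # slice -> [a_wins, total]
    for j in judgments:
        tally[j["slice"]][0] += j["score_a"] > j["score_b"]
        tally[j["slice"]][1] += 1
    return {s: wins / total for s, (wins, total) in tally.items()}

judgments = [
    {"slice": "math",   "score_a": 4, "score_b": 2},
    {"slice": "math",   "score_a": 3, "score_b": 5},
    {"slice": "coding", "score_a": 5, "score_b": 1},
    {"slice": "coding", "score_a": 4, "score_b": 2},
]
print(slice_win_rates(judgments))  # {'math': 0.5, 'coding': 1.0}
```

Slicing like this is what surfaces the interesting findings, e.g. a model that wins overall may still lose on a specific category of prompts.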

Reflecting on Our Experience

Being invited to Google I/O once again, especially being part of the Developer Advisory Board meeting for the second consecutive year, is a testament to the growing partnership and mutual respect between Digits and Google. We are thankful for this opportunity and excited about the collaborations and advancements that will emerge from these engagements.

Our time at Google I/O’24 was not only inspiring but also a powerful reminder of the incredible pace at which technology evolves. With tools like Gemma 2, the Responsible Generative AI Toolkit, and the LLM Comparator, we are on the brink of a new era in AI and ML development. At Digits, we look forward to integrating these innovations into our work and harnessing their potential to create transformative solutions.

A big thank-you goes out to Jeanine Banks and the entire Google team for hosting us at the Google Developer Advisory Board meeting.

* Image credits: Google

University of Washington Lecture on GenAI in Finance

In April, Digits' expert machine-learning team was invited to conduct a lecture at the University of Washington. The event occurred at the Foster School of Business and was attended by a mixed crowd of students and faculty alike.

Seventy-five students flocked to the lecture, a turnout that underscored the growing curiosity about ground-breaking technologies like machine learning and their practical applications in the world of finance.

The lecture provided an overview of machine learning and Generative AI (GenAI) and explored their impacts in the finance sector. Attendees delved deep into understanding GenAI's specific use cases in finance, with our team sharing their exhaustive research findings and experienced insights to provide a wider perspective of GenAI's potential role in revolutionizing traditional accounting methods.

The University of Washington's proactive approach in inviting the Digits team, and the strong attendance, underline the increasing investment in and gravitation toward AI technologies in finance. This trend is expected to continue as technology weaves its way ever deeper into the financial world.

In case you missed it, you can access the lecture slides below to help better understand this technological revolution.

Digits at Google Next’24

We're excited to share the highlights from our recent participation at Google Next’24 on April 9 and 10, where we showcased Digits at the NVIDIA booth. This event provided an unparalleled platform to demonstrate our cutting-edge machine learning models, including the first in the world to handle double-entry accounting effectively. These models are a product of our robust partnership with NVIDIA, which we are happy to highlight today.

Our collaboration with NVIDIA, a leading powerhouse in GPU technology, has been instrumental in powering Digits' machine learning initiatives. With NVIDIA's support and vast tech resources, we have been able to build a state-of-the-art, secure, and private machine-learning infrastructure that has revolutionized the way we handle our double-entry accounting system. This partnership signifies an important milestone in our journey of harnessing machine learning to solve real-world business problems.

Showcasing Digits at the NVIDIA booth at Google Next'24

Our sessions at the NVIDIA booth offered us a unique opportunity to meet and engage with our current customers. It was a privilege to demonstrate how Digits supports startup founders by simplifying their financial processes and helping them understand their financial health. Feedback from customers during these sessions reaffirmed the benefits of our solutions in assisting startups in managing their finances with greater ease.

In addition to showcasing our technology, Google Next’24 was a fantastic opportunity for us to connect with Google experts. These interactions enabled us to gain valuable insights and learnings that we hope to incorporate into our future projects.

We were also excited to dive deep into state-of-the-art open-source machine learning projects featured at Google Next, like Gemma and JAX. These tools hold significant potential, and we will share more details in upcoming blog posts.

In conclusion, our participation at Google Next’24 reinforced some of our fundamental beliefs: that collaboration fuels innovation, that direct customer engagement is invaluable, and that continued learning and exploration are powerful tools for growth. We remain committed to leveraging the potential of machine learning to simplify business finances and believe that with partners like NVIDIA and platforms like Google Cloud, we are well on our path.

A special shout out is due to Michael Thompson, Bailey Blake, Matthew Varacalli, and Martha Aparicio from NVIDIA for this tremendous opportunity. We are already looking forward to next year's event.

Digits at Google Next 23

Every year, Google invites customers and major product partners to their Cloud conference, Google Next. After a multi-year in-person hiatus, Google Next returned in full force to San Francisco’s Moscone Center, and Digits was invited to present how we’ve collaborated with teams at Google to create Digits AI.

Given our experience with Vertex AI across many ML projects at Digits, presenting at Next provided a unique opportunity to showcase how we have been working to push finance and accounting software forward, and also share our experiences in developing machine learning and AI using Google Cloud products.

🤖 Getting Early Access

In the weeks leading up to the conference, our engineering team received early and exclusive access to Google Cloud’s latest release of their Vertex Python SDK. This allows remote execution of machine learning model training or model analysis, all controlled via a local Jupyter notebook. In the coming weeks, we’ll share a more in-depth post, with detailed explanations and feedback on our experience using the new product. But for now, we’ve included a summary of our initial findings as well as a video of our talk at Google Next where we discussed our experiences.

Initial Learnings

Vertex AI has been a fundamental element in building lean machine learning projects here at Digits. We’ve outlined some of the various use cases which were also discussed in more detail during our Next talk:

  • Vertex Pipelines → Any machine learning model in production is trained, evaluated and registered via CI-driven ML pipelines.
  • Vertex Metadata Store → During model training, every produced pipeline artifact (e.g., the training set or the preprocessed training data) is archived through the Metadata Store.
  • Vertex Model Registry → Any positively evaluated, trained machine learning model produced by our machine learning pipelines is registered in a one-stop shop for future consumption.
  • Vertex Online Prediction Endpoints → Data pipelines or backend APIs can access the machine learning models through batch processes or online prediction endpoints.
  • Vertex Matching Engine → Generated embeddings are made available through Vertex's embedding database service, called Matching Engine.

Presenting at Google Next is an experience that outlines the true value of sharing information and learning from others in the industry. This event gave us a platform to share our knowledge with other customers and offer insights into our work and, conversely, we were privileged enough to glean wisdom from some of the industry’s most respected leaders in AI/ML as they shared their experiences and successes using Google products.

A special shout out is due to Sara Robinson, Chris Cho, Melanie Ratchford, and Esther Kim for this tremendous opportunity. We are already looking forward to next year's event in Las Vegas.
