Nate Lebel is a strategic leader in data-driven transformation with over 24 years of experience in transforming organizations through solution architecture, enterprise data strategy, and MLOps.
His talk abstract:
In the rapidly evolving landscape of artificial intelligence (AI), effective operations management for machine learning (ML) and large language models (LLMs) is critical. While new models and methods improve LLMs and agentic systems, few platform-agnostic tools exist for unified system evaluation. We integrate our tools with widely used cloud services and choose open-source tools for portability. We offer a complete open-source LLMOps and MLOps platform for efficient Kubernetes deployment, streamlining AI workflow integration and management at scale.
Our platform promotes accelerated innovation by enabling faster experimentation and iteration cycles. Through enhanced productivity, teams can automate and streamline their end-to-end ML lifecycle processes, from data ingestion to model deployment. With improved decision-making facilitated by robust model monitoring and evaluation tools, stakeholders can gain deeper insights and make more informed choices. Additionally, our open-source approach opens up new opportunities for collaboration and customization, allowing organizations to tailor solutions to their unique needs.
Central to our modular architecture is an intuitive user interface that guides users through the entire end-to-end MLOps cycle. Our platform integrates tools for AI asset discovery, registration, validation, evaluation, monitoring, and assessment to facilitate centralized ML management. Behind this UI is our platform-agnostic Kubernetes environment, allowing users to easily manage new datasets with LakeFS, run dataset and model validation pipelines with Kubeflow, and collect training and deployment metrics with MLflow. To cover staging and deployment, our monitoring tools, coupled with KServe’s inference server and the Kubeflow training operator, complete a centralized end-to-end ML development and deployment ecosystem.
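To make the register-validate-deploy flow described above concrete, here is a minimal, illustrative Python sketch of a centralized asset registry. This is not the platform's actual API; the `Asset`, `Registry` classes and the stage names are hypothetical stand-ins for the coordination that tools like LakeFS (dataset versioning), Kubeflow (validation pipelines), and MLflow (metric tracking) perform in the real system.

```python
# Illustrative sketch only -- a toy stand-in for the platform's
# centralized registration/validation/deployment flow.
from dataclasses import dataclass, field


@dataclass
class Asset:
    name: str
    kind: str              # "dataset" or "model"
    stage: str = "registered"


@dataclass
class Registry:
    assets: dict = field(default_factory=dict)

    def register(self, name: str, kind: str) -> Asset:
        # In the real platform, a dataset would be versioned in LakeFS
        # and a model logged to MLflow at this point.
        self.assets[name] = Asset(name, kind)
        return self.assets[name]

    def validate(self, name: str) -> Asset:
        # A real pipeline (e.g. Kubeflow) would run data/model checks here.
        asset = self.assets[name]
        asset.stage = "validated"
        return asset

    def promote(self, name: str) -> Asset:
        # Gate deployment (e.g. a KServe inference service) on validation.
        asset = self.assets[name]
        if asset.stage != "validated":
            raise ValueError(f"{name} must be validated before deployment")
        asset.stage = "deployed"
        return asset


registry = Registry()
registry.register("churn-model", "model")
registry.validate("churn-model")
print(registry.promote("churn-model").stage)  # -> deployed
```

The point of the gate in `promote` is the same one the abstract makes: centralizing registration and validation lets every downstream deployment step trust a single source of truth about an asset's state.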
Attendees will learn how easily AI centralization can be achieved through the discovery and registration of datasets and models as they deploy this distribution from our GitHub repository. We’ll dive into our data and model monitoring ecosystems. Attendees will get hands-on experience with our data provenance tools like LakeFS and transparency-enhancing monitoring tools such as MLflow and AIM. Join us if you are interested in using open-source technologies to build a strong, cost-effective MLOps and LLMOps infrastructure.
Nate Lebel will be presenting at ODSC East Conference on May 15th
For more information, visit the ODSC East conference page.