October 16, 2024

o1 Models: Support Added with Token and Cost Tracking

Immediate Support for OpenAI’s o1 Models

We’re excited to announce support for OpenAI’s new o1 models, along with comprehensive tracking of token counts and spending.

What Are o1 Models?

OpenAI’s o1 models represent a significant advancement in language AI. They use reinforcement learning to perform complex reasoning tasks, generating an internal chain of thought before producing a final response. This leads to enhanced performance and new capabilities for your applications.

Accurate Cost Tracking

Our platform now fully supports cost tracking for o1 model usage. Because o1 models generate hidden reasoning tokens that are billed as output, it’s important to provide token counts for both input and output to ensure accurate cost calculations.

How to Ensure Accurate Tracking

  • Using Integrations: If you’re using integrations like Langchain, LlamaIndex, or LiteLLM, token usage is automatically tracked.
  • Streaming Usage: For accurate cost calculation while streaming, refer to our guide on Correct Cost Calculation While Streaming.
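
For the streaming case, a minimal sketch of what "providing token counts" can look like: with the official openai npm package, passing `stream_options: { include_usage: true }` makes the final chunk of a stream carry token counts. The shapes and the helper name below are illustrative, not part of any Helicone package.

```typescript
// Minimal sketch: collecting token usage from a streamed chat completion.
// The Usage/Chunk shapes mirror the OpenAI streaming response format.
type Usage = { prompt_tokens: number; completion_tokens: number };
type Chunk = { usage?: Usage | null };

// With the official openai package, request the stream with
// `stream_options: { include_usage: true }` so the last chunk
// includes prompt and completion token counts.
async function collectUsage(stream: AsyncIterable<Chunk>): Promise<Usage | null> {
  let usage: Usage | null = null;
  for await (const chunk of stream) {
    if (chunk.usage) usage = chunk.usage; // only the final chunk carries usage
  }
  return usage;
}
```

Once collected, the prompt and completion token counts can be reported for cost calculation as described in the streaming guide.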

Learn More About o1 Models

October 2, 2024

Introducing new NPM packages for Helicone

We are thrilled to announce the addition of two essential npm packages: @helicone/async and @helicone/helpers. We are also deprecating the @helicone/helicone package.

Why These Changes?

  • Optimized Package Size: The previous @helicone/helicone package wrapped the entire OpenAI SDK, resulting in a bulky install.
  • Enhanced Function Utilization: Many functions within the old package were unused and outdated. The new approach ensures that only necessary functions are included and are up to date.

Detailed Changes

  • Deprecated @helicone/helicone:

    • This package is officially deprecated and will no longer receive updates.
    • Existing functions within this package will continue to operate as expected to ensure a smooth transition.
  • Added @helicone/async:

    • HeliconeAsyncLogger Class: Previously part of @helicone/helicone, this class is now housed within @helicone/async. It retains all existing functionalities, offering robust asynchronous logging capabilities.
  • Added @helicone/helpers:

    • HeliconeManualLogger Class: Moved from @helicone/helicone to @helicone/helpers, this class now adopts a more functional approach. Visit the docs to learn more.

September 12, 2024

Datasets

Streamline your AI data organization and analysis with Helicone’s new Datasets feature. Designed for LLM developers and data scientists, this tool simplifies data handling for improved AI model performance.

Key Features of Helicone Datasets:

  1. Dataset Creation: Quickly set up and organize your AI training data within the requests page.
  2. Export: Easily export your data as JSONL for training or finetuning.
  3. Edit: Edit your dataset and save it as a new version.

To begin using the Datasets feature:

  1. Navigate to the Requests page in your Helicone dashboard.
  2. Enter select mode by clicking the select icon in the top right corner.
  3. Select the data points you want to include in your dataset.
  4. Click on “Create Dataset” and give it a name.
  5. Access your datasets from the new Datasets tab to export or edit as needed.

September 11, 2024

Collapsible Sidebar

Enhance your workflow with our new collapsible sidebar feature. Users can now easily toggle the sidebar visibility, maximizing screen real estate and improving focus. This update offers:

  • One-click sidebar collapse/expand
  • Increased workspace flexibility
  • Improved screen space utilization
  • Seamless transition between full and minimized views

Optimize your productivity by customizing your interface on demand. Experience a cleaner, more adaptable workspace with our latest sidebar enhancement.

September 10, 2024

Slack Alerts

Real-Time Alerts Now Available in Slack for Faster Issue Resolution

Stay on top of critical issues with Helicone’s latest update: Slack Integration for Alerts. In addition to email notifications, you can now receive real-time alerts directly in your Slack workspace for faster action when something goes wrong. To get started, visit the Alerts page to create or edit an alert. Enhance your team’s productivity by responding to key notifications without delay.

August 29, 2024

#1 Product of the Day on Product Hunt

Helicone Reaches #1 on Product Hunt!

This achievement reflects our team’s hard work and the incredible support from our community. We’re thrilled about the boost in visibility for our platform!

Highlights:

  • #1 on Product Hunt’s daily leaderboard
  • Positive feedback from the open-source community
  • Surge in new user sign-ups and engagement

A huge thank you to everyone who upvoted, commented, and shared Helicone. Your support motivates us to keep improving!

For more on our Product Hunt journey, check out our blog posts:

Links:

Product Hunt: Helicone on Product Hunt

August 25, 2024

Docker images on Docker Hub

We’ve started publishing Docker images on Docker Hub.

This update simplifies Helicone deployment on platforms that don’t natively support the Google Container Registry. For detailed instructions, please refer to our updated self-hosting guide.

Links:

Docker Hub: helicone

August 12, 2024

New hpstatic Function for Static Prompts in LLM Applications

We’ve added a new hpstatic function to our Helicone Prompt Formatter (HPF) package. This function allows users to create static prompts that don’t change between requests, which is particularly useful for system prompts or other constant text. The hpstatic function wraps the text in <helicone-prompt-static> tags, indicating to Helicone that this part of the prompt should not be treated as variable input.

Here’s a quick example of how to use hpstatic:

import OpenAI from "openai";
import { hpf, hpstatic } from "@helicone/prompts";

// Assumes OPENAI_API_KEY is set in the environment.
const openai = new OpenAI();

// Example variable captured by hpf's template syntax.
const character = "a lighthouse keeper";

const systemPrompt = hpstatic`You are a helpful assistant.`;
const userPrompt = hpf`Write a story about ${{ character }}`;

const chatCompletion = await openai.chat.completions.create(
  {
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: userPrompt },
    ],
    model: "gpt-3.5-turbo",
  },
  {
    headers: {
      "Helicone-Prompt-Id": "prompt_story",
    },
  }
);

This new feature enhances our prompt management capabilities, allowing for more flexible and efficient prompt structuring in your applications.

Start Using Static Prompts 🚀

August 9, 2024

Ragas Integration for RAG System Evaluation

We’re excited to announce our integration with Ragas, an open-source framework for evaluating Retrieval-Augmented Generation (RAG) systems. This integration allows you to:

  • Monitor and analyze the performance of your RAG pipelines
  • Gain insights into RAG effectiveness using metrics like faithfulness, answer relevancy, and context precision
  • Easily identify areas for improvement in your RAG systems

Check out this quick video overview of the Ragas integration:

To get started with the Ragas integration, visit our documentation for step-by-step instructions and code examples.

August 6, 2024

Optimistic Updates & Asynchronous Loading in Requests Page

We’ve improved data loading in the Requests page of the Helicone platform. By fetching metadata and request bodies separately and loading data asynchronously, we’ve reduced the time it takes to render large tables by almost 6x, improving speed and UX.

July 26, 2024

New Assistants UI Playground

We’re thrilled to announce a major update to our Assistants UI Playground! Head to the Playground and click the “Try New Playground” button to explore the latest improvements:

  • Streamed responses for real-time interaction
  • Enhanced tool rendering for better visualization
  • Improved reliability for a smoother experience

Coming soon:

  • Expanded model support
  • Advanced prompt management
  • Integrated Markdown editor

Try out the new Playground today and elevate your LLM testing experience!

July 24, 2024

Fireworks AI + Helicone

We’re excited to announce our integration with Fireworks AI, the high-performance LLM platform! Enhance your AI applications with Helicone’s powerful observability tools in just two easy steps:

  1. Generate a write-only API key in your Helicone account.
  2. Update your Fireworks AI base URL to:
    https://fireworks.helicone.ai
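
In code, the change amounts to pointing your client at the Helicone base URL and sending both keys. A sketch of the request settings; the `Helicone-Auth` header follows Helicone’s usual proxy convention and the helper name is our own, so double-check against the integration guide:

```typescript
// Sketch: request settings for routing Fireworks AI calls through Helicone.
function heliconeFireworksConfig(heliconeKey: string, fireworksKey: string) {
  return {
    baseURL: "https://fireworks.helicone.ai",
    headers: {
      Authorization: `Bearer ${fireworksKey}`, // Fireworks AI key
      "Helicone-Auth": `Bearer ${heliconeKey}`, // write-only Helicone key
      "Content-Type": "application/json",
    },
  };
}

const cfg = heliconeFireworksConfig("hk-...", "fw-...");
console.log(cfg.baseURL); // https://fireworks.helicone.ai
```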
    

That’s all it takes! Now you can monitor, analyze, and optimize your Fireworks AI models with Helicone’s comprehensive insights.

For more details, check out our Fireworks AI integration guide.

July 23, 2024

Dify + Helicone

We’re thrilled to announce our integration with Dify, the open-source LLM app development platform! Now you can easily add Helicone’s powerful observability features to your Dify projects in just two simple steps:

  1. Generate a write-only API key in your Helicone account.
  2. Set your API base URL in Dify to:
    https://oai.helicone.ai/<API_KEY>
    

That’s it! Enjoy comprehensive logs and insights for your Dify LLM applications.

Check out our integration guide for more details.

July 22, 2024

Prompts package

We’re excited to announce the release of our new @helicone/prompts package! This lightweight library simplifies prompt formatting for Large Language Models, offering features like:

  • Automated versioning with change detection
  • Support for chat-like prompt templates
  • Efficient variable handling and extraction

Check it out on GitHub and enhance your LLM workflow today!