Before we release our product, we want to think through worst-case scenarios.

  • What happens if our API requests fail?
  • What if users can't log in?
  • How will we know our application is running successfully?


Observability & monitoring are two key pillars of understanding your system. In the world of SaaS, observability means you can answer any questions about what’s happening on the inside of the system just by observing from the outside. There's no need to ship new code.

Having a static site drastically reduces the amount of complexity in our deployment setup. However, we still have serverless functions in play, and rely on third-party solutions. There is still a chance things can go wrong. When they do, we'd like to be prepared to solve the issue promptly.

With an "observable system", we have the tools to understand what's happening in one query. Simply ask a question and learn more about how our software is working.

Adding Logging

Before we can query our data, we need to have data. We want to log out relevant information for critical events in our system – a live stream of events.

There are many ways to add logging to SaaS applications, each with their own tradeoffs. For this course, I'll focus on a solution that will allow you to plug-and-play the provider of your choice. Then, I'll recommend my favorite solution for the opinionated stack.

With most logging providers, anything you console.log will show up. For example, your basic log might look something like this.

console.log('Completed successfully!')
console.error('500: Request failed.')

You can also have structured logs through JSON objects. Certain logging providers will parse the object and provide you access to query individual fields. For example, with Node.js you might use Pino.

import logger from 'pino'

logger().info({
  user: {
    name: 'Lee Robinson',
    email: 'me@leerob.io',
    id: 'hlk436k4l26h2'
  },
  event: {
    type: 'request',
    tag: 'api'
  }
})

Which then gives you a structured object like this.

{
  "level": 30,
  "time": 1531171082399,
  "pid": 657,
  "hostname": "",
  "user": {
    "name": "Lee Robinson",
    "email": "me@leerob.io",
    "id": "hlk436k4l26h2"
  },
  "event": {
    "type": "request",
    "tag": "api"
  }
}


Logflare is my favorite way to handle logs for applications deployed to Vercel. Their Vercel Integration makes it simple to go from console.log to structured logs you can query. Their free tier is very generous too (5,200,000 event logs per month).

Logflare automatically parses information from your serverless functions so you can see how long your requests take. With normal application logs, you don't get any information about what's cached. With Logflare + Vercel, you do, and it's structured and queryable. Another huge advantage of Logflare is that it's backed by BigQuery. It's significantly cheaper than other SaaS logging products out there and allows you to "bring your own backend" if you want. No vendor lock-in!

Setting up the Vercel Integration

Vercel gives us three main kinds of logs: build, static and lambda. Thinking about our Next.js app, this translates to:

  • build -> next build
  • static -> pages/index.js
  • lambda -> pages/api/hello.js
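To make the mapping concrete, here's a minimal sketch of what a pages/api/hello.js route might look like (the handler body is a placeholder, not from this project). Each request to /api/hello runs this serverless function, so its output appears under the lambda logs:

```javascript
// pages/api/hello.js — a minimal Next.js API route.
// Requests to /api/hello execute this function, producing "lambda" logs,
// while next build output shows up as "build" logs and pre-rendered
// pages like pages/index.js as "static" logs.
export default function handler(req, res) {
  res.status(200).json({ name: 'Hello' })
}
```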

Installing & Using Pino Logflare

In addition to the system logs generated by Vercel, you'll also want to add custom logging when things go wrong.

First, let's install the necessary libraries in our application.

$ yarn add pino pino-logflare

Then, retrieve the Ingest API Key and the Stream ID from your dashboard and add them as environment variables.



These keys can be made public since they don't allow for any sort of account management or querying access. They're similar to a unique identifier for analytics.
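For example, in a local .env.local file the entries might look like this (the variable names match the logger setup used in this project; the values are placeholders for your own key and stream ID):

```shell
# .env.local — placeholder values; paste in your own Logflare credentials
NEXT_PUBLIC_LOGFLARE_KEY=your-ingest-api-key
NEXT_PUBLIC_LOGFLARE_STREAM=your-stream-id
```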

Next, let's set up a logging utility that will initialize pino-logflare and forward the correct environment variables. We also set up some default options, as well as forwarding the environment and Git commit SHA. formatObjectKeys helps ensure that - is replaced with _ for storage in Logflare.


import pino from 'pino'
import { logflarePinoVercel } from 'pino-logflare'

const { stream, send } = logflarePinoVercel({
  apiKey: process.env.NEXT_PUBLIC_LOGFLARE_KEY,
  sourceToken: process.env.NEXT_PUBLIC_LOGFLARE_STREAM
})

const logger = pino(
  {
    browser: {
      transmit: {
        send: send
      }
    },
    level: 'debug',
    base: {
      env: process.env.NODE_ENV || 'ENV not set',
      revision: process.env.VERCEL_GITHUB_COMMIT_SHA
    }
  },
  stream
)

const formatObjectKeys = (headers) => {
  const keyValues = {}

  Object.keys(headers).map((key) => {
    const newKey = key.replace(/-/g, '_')
    keyValues[newKey] = headers[key]
  })

  return keyValues
}

export { logger, formatObjectKeys }

Finally, we can use our logger anywhere we want. For example, let's log out what error we receive inside a try/catch block in an API route.


import { auth } from '@/lib/firebase-admin'
import { getUserSites } from '@/lib/db-admin'
import { logger, formatObjectKeys } from '@/utils/logger'

export default async (req, res) => {
  try {
    const { uid } = await auth.verifyIdToken(req.headers.token)
    const { sites } = await getUserSites(uid)

    res.status(200).json({ sites })
  } catch (error) {
    logger.error(
      {
        request: {
          headers: formatObjectKeys(req.headers),
          url: req.url,
          method: req.method
        },
        response: {
          statusCode: res.statusCode
        }
      },
      error.message
    )

    res.status(500).json({ error })
  }
}