
File upload

Telefunc supports File and Blob arguments — you can pass them to a telefunction just like any other argument. Any signature works: a single file, multiple file arguments, File[] arrays, files mixed with other arguments — it's completely transparent.

When a telefunction call contains files, Telefunc automatically switches from JSON to multipart/form-data.

Example

// FileUpload.telefunc.js
// Environment: server
 
import fs from 'node:fs'
import { pipeline } from 'node:stream/promises'
 
export async function onUpload(file, description) {
  // Stream to disk with backpressure: constant memory usage, no matter the file size
  // (in production, sanitize file.name before using it in a path)
  await pipeline(file.stream(), fs.createWriteStream(`./uploads/${file.name}`))
 
  console.log(`Saved ${file.name} (${file.size} bytes): ${description}`)
}
// FileUpload.telefunc.ts
// Environment: server
 
import fs from 'node:fs'
import { pipeline } from 'node:stream/promises'
 
export async function onUpload(file: File, description: string) {
  // Stream to disk with backpressure: constant memory usage, no matter the file size
  // (in production, sanitize file.name before using it in a path)
  await pipeline(file.stream(), fs.createWriteStream(`./uploads/${file.name}`))
 
  console.log(`Saved ${file.name} (${file.size} bytes): ${description}`)
}
// FileUpload.jsx
// Environment: client
 
import { onUpload } from './FileUpload.telefunc'
 
function FileUpload() {
  return (
    <form
      onSubmit={async (e) => {
        e.preventDefault()
        const form = new FormData(e.currentTarget)
        const file = form.get('file')
        const description = form.get('description')
        await onUpload(file, description)
      }}
    >
      <input name="file" type="file" />
      <input name="description" type="text" placeholder="Description" />
      <button type="submit">Upload</button>
    </form>
  )
}
// FileUpload.tsx
// Environment: client
 
import { onUpload } from './FileUpload.telefunc'
 
function FileUpload() {
  return (
    <form
      onSubmit={async (e) => {
        e.preventDefault()
        const form = new FormData(e.currentTarget)
        const file = form.get('file') as File
        const description = form.get('description') as string
        await onUpload(file, description)
      }}
    >
      <input name="file" type="file" />
      <input name="description" type="text" placeholder="Description" />
      <button type="submit">Upload</button>
    </form>
  )
}

Reading strategies

Each file argument is a standard File / Blob object:

| Method | Memory usage | Use case |
|---|---|---|
| file.stream() | Low — chunk size (1) | Pipe to disk, S3, etc. |
| file.arrayBuffer() | High — file size (2) | Process in memory |
| file.text() | High — file size (2) | Read text content |

(1): Only a single file chunk is held in memory at a time. Thus, memory consumption is low and constant (the chunk size). Recommended if files are expected to be large.
(2): The whole file is loaded into memory. For large files (e.g. a large video) this leads to prohibitively high memory usage.
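As a quick illustration of the low-memory path, here's a minimal sketch that consumes a stream chunk by chunk (using an in-memory Blob as a stand-in for an uploaded file — in a telefunction, the File argument takes its place):

```typescript
// Stand-in for an uploaded file; in a telefunction this would be the File argument
const file = new Blob(['a'.repeat(1000)])

// Consume the stream chunk by chunk: only one chunk is in memory at a time
let total = 0
for await (const chunk of file.stream()) {
  total += chunk.byteLength
}
console.log(total) // 1000
```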

Limitations

For best performance and efficiency, nothing is buffered internally — file bytes flow directly from the HTTP stream to your code. If you don't read a file argument, the file bytes never leave the sender (the user's browser).

This zero-buffering design comes with the following inherent limitations.

One-shot reads

Each file can only be read once — calling .stream(), .text(), or .arrayBuffer() a second time throws an error.

The HTTP stream isn't buffered and can therefore only be consumed once.

If you need the data multiple times, buffer it into a variable first.
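For instance (a minimal sketch, using an in-memory Blob as a stand-in for the file argument):

```typescript
// Stand-in for an uploaded file; in a telefunction this would be the File argument
const file = new Blob(['hello world'])

// Read the underlying stream exactly once, into a buffer
const buf = await file.arrayBuffer()

// The buffered bytes can then be reused any number of times
const text = new TextDecoder().decode(buf)
const bytes = new Uint8Array(buf)
console.log(text, bytes.byteLength) // "hello world" 11
```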

Read in order

When a telefunction has multiple file arguments (e.g. file1 and file2), they must be read in the order they appear in the function signature.

Reading out of order causes file1 to be discarded (with a warning).

That's because all files share a single forward-only HTTP stream (reading file2 before file1 would require buffering file1 in memory).

That said, you don't need to await each file before starting to read the next — you can kick off the reads concurrently (e.g. Promise.all([file1.text(), file2.text()])) and they are automatically streamed in the correct order.

// ✅ Works
await file1.text()
await file2.text()
 
// ❌ Doesn't work
await file2.text()
await file1.text()

// ✅ Works
file1.text()
file2.text()
 
// ❌ Doesn't work
file2.text()
file1.text()

// ✅ Works
await Promise.all([
  file1.text(),
  file2.text(),
])
 
// ❌ Doesn't work
await Promise.all([
  file2.text(),
  file1.text(),
])

// ✅ Works
await Promise.all([
  file1.stream().pipeTo(..),
  file2.stream().pipeTo(..),
])
 
// ❌ Doesn't work
await Promise.all([
  file2.stream().pipeTo(..),
  file1.stream().pipeTo(..),
])

Server integration

Pass the Request object directly:

const httpResponse = await telefunc({ request })

Alternatively, with Express, Fastify, or any Node.js framework, you can pass the request as a Readable stream along with the Content-Type header:

app.all('/_telefunc', async (req, res) => {
  const httpResponse = await telefunc({
    url: req.originalUrl,
    method: req.method,
    readable: req,
    contentType: req.headers['content-type'] || ''
  })
  res.status(httpResponse.statusCode).type(httpResponse.contentType).send(httpResponse.body)
})

How it works

You can skip reading this section. Read this only if you're curious.

Telefunc uses a custom multipart stream parser — files are not buffered into memory on the server.

  1. The client serializes the telefunction call into a multipart/form-data request. File/Blob arguments are replaced with placeholder descriptors and sent as separate binary parts.
  2. On the server, Telefunc parses the metadata first, then creates lazy File/Blob objects that reference the HTTP body stream without reading it yet.
  3. When your telefunction calls file.stream(), file.text(), or file.arrayBuffer(), the bytes are pulled directly from the HTTP stream on demand.
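Step 1 above can be sketched like this (illustrative only — the part names and the placeholder shape are made up for this sketch, not Telefunc's actual wire format):

```typescript
// Stand-in for a File argument passed to a telefunction
const file = new Blob(['hello'])

const form = new FormData()
// JSON part: the call metadata, with the File argument replaced by a placeholder descriptor
form.append('call', JSON.stringify({ name: 'onUpload', args: [{ __file: 0 }, 'my description'] }))
// Binary part: the file bytes, sent separately
form.append('file-0', file, 'photo.png')

// fetch(url, { method: 'POST', body: form }) would send this as multipart/form-data
```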

This means file bytes only flow through memory when you read them — and if you stream to disk, memory consumption is constant regardless of file size.
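The lazy, one-shot semantics of steps 2 and 3 can be sketched as a wrapper class (a simplified illustration under these assumptions, not Telefunc's actual implementation):

```typescript
// Simplified illustration: a lazy handle that defers reading until first access,
// and refuses a second read because the underlying stream is forward-only.
class LazyFile {
  #consumed = false
  constructor(
    public readonly name: string,
    // In the real parser this would pull bytes from the HTTP body stream
    private readonly pull: () => Promise<Uint8Array>,
  ) {}

  async arrayBuffer(): Promise<ArrayBuffer> {
    if (this.#consumed) throw new Error(`File ${this.name} was already read`)
    this.#consumed = true
    const bytes = await this.pull() // bytes flow only now, on demand
    return bytes.buffer as ArrayBuffer
  }
}
```

A second call to arrayBuffer() throws, mirroring the "One-shot reads" limitation above.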