
Commit 39e5c1f

feat (provider/luma): add Luma provider (#4516)
1 parent e34c7c2 commit 39e5c1f

35 files changed, +1971 -65 lines changed

.changeset/thin-rice-drum.md (+5, new file)

---
'@ai-sdk/luma': patch
---

feat (provider/luma): add Luma provider

.changeset/weak-bobcats-wink.md (+6, new file)

---
'@ai-sdk/provider-utils': patch
'@ai-sdk/fireworks': patch
---

feat (provider-utils): add getFromApi and response handlers for binary responses and status-code errors
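A rough sketch of how a provider package might use these additions to fetch a generated asset as raw bytes via GET. The handler factory names (`createBinaryResponseHandler`, `createStatusCodeErrorResponseHandler`) and the exact option shape of `getFromApi` are assumptions inferred from this changeset rather than a confirmed API surface:

```ts
// Hypothetical usage sketch -- names and option shape assumed from the changeset above.
import {
  getFromApi,
  createBinaryResponseHandler,
  createStatusCodeErrorResponseHandler,
} from '@ai-sdk/provider-utils';

export async function downloadGeneratedImage(
  url: string,
  abortSignal?: AbortSignal,
) {
  const { value: bytes } = await getFromApi({
    url,
    // assumed: resolves the response body as a Uint8Array
    successfulResponseHandler: createBinaryResponseHandler(),
    // assumed: maps non-2xx status codes to APICallError instances
    failedResponseHandler: createStatusCodeErrorResponseHandler(),
    abortSignal,
  });

  return bytes;
}
```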

CHANGELOG.md (+1)

@@ -17,6 +17,7 @@ You can find the changelogs for the individual packages in their respective `CHA
 - [@ai-sdk/google](./packages/google/CHANGELOG.md)
 - [@ai-sdk/google-vertex](./packages/google-vertex/CHANGELOG.md)
 - [@ai-sdk/groq](./packages/groq/CHANGELOG.md)
+- [@ai-sdk/luma](./packages/luma/CHANGELOG.md)
 - [@ai-sdk/mistral](./packages/mistral/CHANGELOG.md)
 - [@ai-sdk/openai](./packages/openai/CHANGELOG.md)
 - [@ai-sdk/openai-compatible](./packages/openai-compatible/CHANGELOG.md)

content/docs/03-ai-sdk-core/35-image-generation.mdx (+2)

@@ -225,3 +225,5 @@ try {
 | [Fireworks](/providers/ai-sdk-providers/fireworks#image-models) | `accounts/fireworks/models/playground-v2-1024px-aesthetic` | 640x1536, 768x1344, 832x1216, 896x1152, 1024x1024, 1152x896, 1216x832, 1344x768, 1536x640 |
 | [Fireworks](/providers/ai-sdk-providers/fireworks#image-models) | `accounts/fireworks/models/SSD-1B` | 640x1536, 768x1344, 832x1216, 896x1152, 1024x1024, 1152x896, 1216x832, 1344x768, 1536x640 |
 | [Fireworks](/providers/ai-sdk-providers/fireworks#image-models) | `accounts/fireworks/models/stable-diffusion-xl-1024-v1-0` | 640x1536, 768x1344, 832x1216, 896x1152, 1024x1024, 1152x896, 1216x832, 1344x768, 1536x640 |
+| [Luma](/providers/ai-sdk-providers/luma#image-models) | `photon-1` | 1:1, 3:4, 4:3, 9:16, 16:9, 9:21, 21:9 |
+| [Luma](/providers/ai-sdk-providers/luma#image-models) | `photon-flash-1` | 1:1, 3:4, 4:3, 9:16, 16:9, 9:21, 21:9 |
@@ -0,0 +1,243 @@ (new file)

---
title: Luma
description: Learn how to use Luma AI models with the AI SDK.
---

# Luma Provider

[Luma AI](https://lumalabs.ai/) provides state-of-the-art image generation models through their Dream Machine platform. Their models offer ultra-high quality image generation with superior prompt understanding and unique capabilities like character consistency and multi-image reference support.

## Setup

The Luma provider is available via the `@ai-sdk/luma` module. You can install it with:

<Tabs items={['pnpm', 'npm', 'yarn']}>
  <Tab>
    <Snippet text="pnpm add @ai-sdk/luma" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install @ai-sdk/luma" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add @ai-sdk/luma" dark />
  </Tab>
</Tabs>

## Provider Instance

You can import the default provider instance `luma` from `@ai-sdk/luma`:

```ts
import { luma } from '@ai-sdk/luma';
```

If you need a customized setup, you can import `createLuma` and create a provider instance with your settings:

```ts
import { createLuma } from '@ai-sdk/luma';

const luma = createLuma({
  apiKey: 'your-api-key', // optional, defaults to LUMA_API_KEY environment variable
  baseURL: 'custom-url', // optional
  headers: {
    /* custom headers */
  }, // optional
});
```

You can use the following optional settings to customize the Luma provider instance:

- **baseURL** _string_

  Use a different URL prefix for API calls, e.g. to use proxy servers.
  The default prefix is `https://api.lumalabs.ai`.

- **apiKey** _string_

  API key that is sent using the `Authorization` header.
  It defaults to the `LUMA_API_KEY` environment variable.

- **headers** _Record&lt;string,string&gt;_

  Custom headers to include in the requests.

- **fetch** _(input: RequestInfo, init?: RequestInit) => Promise&lt;Response&gt;_

  Custom [fetch](https://developer.mozilla.org/en-US/docs/Web/API/fetch) implementation.
  You can use it as a middleware to intercept requests, or to provide a custom fetch
  implementation for e.g. testing (see the sketch after this list).
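As an illustration of the `fetch` setting, the sketch below wraps the global `fetch` to log each outgoing request before delegating to it. The logging wrapper is not part of the SDK; it only shows the kind of function you might pass in:

```ts
import { createLuma } from '@ai-sdk/luma';

// Illustrative only: a custom fetch that logs every Luma API request
// and then falls through to the global fetch implementation.
const luma = createLuma({
  fetch: async (input, init) => {
    console.log('Luma API request:', input);
    return fetch(input, init);
  },
});
```
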
## Image Models
You can create Luma image models using the `.image()` factory method. For more on image generation with the AI SDK see [generateImage()](/docs/reference/ai-sdk-core/generate-image).

### Basic Usage

```ts
import { luma } from '@ai-sdk/luma';
import { experimental_generateImage as generateImage } from 'ai';
import fs from 'fs';

const { image } = await generateImage({
  model: luma.image('photon-1'),
  prompt: 'A serene mountain landscape at sunset',
  aspectRatio: '16:9',
});

const filename = `image-${Date.now()}.png`;
fs.writeFileSync(filename, image.uint8Array);
console.log(`Image saved to ${filename}`);
```

### Image Model Settings

When creating an image model, you can customize the generation behavior with optional settings:

```ts
const model = luma.image('photon-1', {
  maxImagesPerCall: 1, // Maximum number of images to generate per API call
  pollIntervalMillis: 5000, // How often to check for completed images (in ms)
  maxPollAttempts: 10, // Maximum number of polling attempts before timeout
});
```

Since Luma processes images through an asynchronous queue system, these settings allow you to tune the polling behavior:

- **maxImagesPerCall** _number_

  Override the maximum number of images generated per API call. Defaults to 1.

- **pollIntervalMillis** _number_

  Control how frequently the API is checked for completed images while they are
  being processed. Defaults to 500ms.

- **maxPollAttempts** _number_

  Limit how long to wait for results before timing out, since image generation
  is queued asynchronously. Defaults to 120 attempts. A combined usage sketch
  follows this list.
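Putting the settings above together, this hedged sketch configures a model that polls every 2 seconds for up to 60 attempts (roughly a two-minute budget, since the total wait is approximately `pollIntervalMillis × maxPollAttempts`) and passes it to `generateImage`. The numbers are illustrative, not recommendations:

```ts
import { luma } from '@ai-sdk/luma';
import { experimental_generateImage as generateImage } from 'ai';

// Illustrative values: poll the queue every 2 seconds, up to 60 times,
// before the call gives up on the queued generation (~2 minutes total).
const patientModel = luma.image('photon-1', {
  pollIntervalMillis: 2000,
  maxPollAttempts: 60,
});

const { image } = await generateImage({
  model: patientModel,
  prompt: 'A lighthouse on a stormy coast, long exposure',
});
```
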
### Model Capabilities
Luma offers two main models:

| Model            | Description                                                       |
| ---------------- | ----------------------------------------------------------------- |
| `photon-1`       | High-quality image generation with superior prompt understanding  |
| `photon-flash-1` | Faster generation optimized for speed while maintaining quality   |

Both models support the following aspect ratios:

- 1:1
- 3:4
- 4:3
- 9:16
- 16:9 (default)
- 9:21
- 21:9

For more details about supported aspect ratios, see the [Luma Image Generation documentation](https://docs.lumalabs.ai/docs/image-generation).

Key features of Luma models include:

- Ultra-high quality image generation
- 10x higher cost efficiency compared to similar models
- Superior prompt understanding and adherence
- Unique character consistency capabilities from single reference images
- Multi-image reference support for precise style matching

### Advanced Options

Luma models support several advanced features through the `providerOptions.luma` parameter.

#### Image Reference

Use up to 4 reference images to guide your generation. Useful for creating variations or visualizing complex concepts. Adjust the `weight` (0-1) to control the influence of reference images.

```ts
// Example: Generate a salamander with reference
await generateImage({
  model: luma.image('photon-1'),
  prompt: 'A salamander at dusk in a forest pond, in the style of ukiyo-e',
  providerOptions: {
    luma: {
      image_ref: [
        {
          url: 'https://example.com/reference.jpg',
          weight: 0.85,
        },
      ],
    },
  },
});
```

#### Style Reference

Apply specific visual styles to your generations using reference images. Control the style influence using the `weight` parameter.

```ts
// Example: Generate with style reference
await generateImage({
  model: luma.image('photon-1'),
  prompt: 'A blue cream Persian cat launching its website on Vercel',
  providerOptions: {
    luma: {
      style_ref: [
        {
          url: 'https://example.com/style.jpg',
          weight: 0.8,
        },
      ],
    },
  },
});
```

#### Character Reference

Create consistent and personalized characters using up to 4 reference images of the same subject. More reference images improve character representation.

```ts
// Example: Generate character-based image
await generateImage({
  model: luma.image('photon-1'),
  prompt: 'A woman with a cat riding a broomstick in a forest',
  providerOptions: {
    luma: {
      character_ref: {
        identity0: {
          images: ['https://example.com/character.jpg'],
        },
      },
    },
  },
});
```

#### Modify Image

Transform existing images using text prompts. Use the `weight` parameter to control how closely the result matches the input image (higher weight = closer to input but less creative).

<Note>
  For color changes, it's recommended to use a lower weight value (0.0-0.1).
</Note>

```ts
// Example: Modify existing image
await generateImage({
  model: luma.image('photon-1'),
  prompt: 'transform the bike to a boat',
  providerOptions: {
    luma: {
      modify_image_ref: {
        url: 'https://example.com/image.jpg',
        weight: 1.0,
      },
    },
  },
});
```

For more details about Luma's capabilities and features, visit the [Luma Image Generation documentation](https://docs.lumalabs.ai/docs/image-generation).

examples/ai-core/package.json (+1)

@@ -14,6 +14,7 @@
   "@ai-sdk/google": "1.1.2",
   "@ai-sdk/google-vertex": "2.1.2",
   "@ai-sdk/groq": "1.1.2",
+  "@ai-sdk/luma": "0.0.0",
   "@ai-sdk/mistral": "1.1.2",
   "@ai-sdk/openai": "1.1.2",
   "@ai-sdk/openai-compatible": "0.1.3",

examples/ai-core/src/e2e/feature-test-suite.ts (+19)

@@ -77,6 +77,7 @@ export interface ModelVariants {
   invalidModel?: LanguageModelV1;
   languageModels?: ModelWithCapabilities<LanguageModelV1>[];
   embeddingModels?: ModelWithCapabilities<EmbeddingModelV1<string>>[];
+  invalidImageModel?: ImageModelV1;
   imageModels?: ModelWithCapabilities<ImageModelV1>[];
 }

@@ -1021,6 +1022,24 @@
     });
   }

+  if (models.invalidImageModel) {
+    describe('Image Model Error Handling:', () => {
+      const invalidModel = models.invalidImageModel!;
+
+      it('should throw error on generate image attempt with invalid model ID', async () => {
+        try {
+          await generateImage({
+            model: invalidModel,
+            prompt: 'This should fail',
+          });
+        } catch (error) {
+          expect(error).toBeInstanceOf(APICallError);
+          errorValidator(error as APICallError);
+        }
+      });
+    });
+  }
+
   if (models.embeddingModels && models.embeddingModels.length > 0) {
     describe.each(createModelObjects(models.embeddingModels))(
       'Embedding Model: $modelId',

examples/ai-core/src/e2e/luma.test.ts (+27, new file)

import { expect } from 'vitest';
import { luma as provider, LumaErrorData } from '@ai-sdk/luma';
import { APICallError } from '@ai-sdk/provider';
import {
  createFeatureTestSuite,
  createImageModelWithCapabilities,
} from './feature-test-suite';
import 'dotenv/config';

createFeatureTestSuite({
  name: 'Luma',
  models: {
    invalidImageModel: provider.image('no-such-model'),
    imageModels: [
      createImageModelWithCapabilities(provider.image('photon-flash-1')),
      createImageModelWithCapabilities(provider.image('photon-1')),
    ],
  },
  timeout: 30000,
  customAssertions: {
    errorValidator: (error: APICallError) => {
      expect((error.data as LumaErrorData).detail[0].msg).toMatch(
        /Input should be/i,
      );
    },
  },
})();
@@ -0,0 +1,32 @@ (new file)

import { luma } from '@ai-sdk/luma';
import { experimental_generateImage as generateImage } from 'ai';
import 'dotenv/config';
import fs from 'fs';

async function main() {
  const result = await generateImage({
    model: luma.image('photon-flash-1'),
    prompt: 'A woman with a cat riding a broomstick in a forest',
    aspectRatio: '1:1',
    providerOptions: {
      luma: {
        // https://docs.lumalabs.ai/docs/image-generation#character-reference
        character_ref: {
          identity0: {
            images: [
              'https://hebbkx1anhila5yf.public.blob.vercel-storage.com/future-me-8hcBWcZOkbE53q3gshhEm16S87qDpF.jpeg',
            ],
          },
        },
      },
    },
  });

  for (const [index, image] of result.images.entries()) {
    const filename = `image-${Date.now()}-${index}.png`;
    fs.writeFileSync(filename, image.uint8Array);
    console.log(`Image saved to ${filename}`);
  }
}

main().catch(console.error);
