Show HN: Trained a 12M transformer on an ML framework built from scratch

Original source: hackernews · Summarized and analyzed by Genesis Park

Summary

A TypeScript machine learning framework was introduced that ships Rust native backends for high performance on CPU, CUDA, and WebGPU. It provides automatic differentiation and a PyTorch-like API covering tensor operations, neural network modules, and optimizers, and the author used it to successfully train a transformer model with 12 million parameters.

Body

A TypeScript ML framework with Rust native backends (CPU, CUDA, WebGPU) providing autograd, tensor operations, and neural network training at GPU speed.

Features

- Automatic differentiation -- full backward pass through an autograd tape
- GPU acceleration -- CUDA (NVIDIA) and WebGPU (Metal/Vulkan/DX12) backends
- PyTorch-like API -- familiar `Tensor`, `Module`, `Parameter`, and optimizer classes
- Comprehensive ops -- elementwise, matmul, conv1d/conv2d, pooling, reductions, activations
- Built-in modules -- `Linear`, `Conv1d`, `Conv2d`, `Embedding`, `ReLU`, `Sigmoid`, `Tanh`
- Optimizers -- `SGD` and `Adam` (AdamW) with learning rate scheduling

Installation

```bash
npm install @mni-ml/framework
```

Quick start

```typescript
import { Tensor, Linear, Adam, Parameter, softmax, crossEntropyLoss } from '@mni-ml/framework';

// Create tensors
const x = Tensor.rand([32, 10]);
const targets = [[0], [1], [2], /* ... */];

// Build a model
const layer1 = new Linear(10, 64);
const layer2 = new Linear(64, 3);

// Forward pass
let h = layer1.forward(x).relu();
let logits = layer2.forward(h);
let loss = crossEntropyLoss(logits, targets);

// Backward pass
loss.backward();

// Optimize
const params = [...layer1.parameters(), ...layer2.parameters()];
const optimizer = new Adam(params, 0.001);
optimizer.step();
optimizer.zeroGrad();
```

Tensor API

```typescript
// Creation
Tensor.zeros([2, 3])            // zero-filled
Tensor.ones([2, 3])             // one-filled
Tensor.rand([2, 3])             // uniform [0, 1)
Tensor.randn([2, 3])            // normal distribution
Tensor.fromFloat32(data, shape) // from Float32Array

// Arithmetic (with autograd)
a.add(b)  a.add(2.0)   // addition
a.sub(b)               // subtraction
a.mul(b)  a.mul(2.0)   // multiplication
a.div(b)  a.div(2.0)   // division
a.neg()                // negation
a.exp()   a.log()      // exponentials
a.pow(2)               // power

// Activations
a.relu()  a.sigmoid()

// Reductions
a.sum(dim)   a.sum()   // sum along dim or all
a.mean(dim)  a.mean()  // mean along dim or all
a.max(dim)             // max along dim

// Comparisons (returns 0/1 tensor, no gradient)
a.lt(b)  a.gt(b)  a.eq(b)  a.isClose(b, tol)

// Layout
a.view(2, 3)     // reshape
a.permute(1, 0)  // transpose
a.contiguous()   // ensure contiguous memory

// Linear algebra
a.matmul(b)      // matrix multiplication

// Convolution
a.conv1d(weight, stride, padding)
a.conv2d(weight, stride, padding)

// Utilities
a.clone()  a.detach()    // copy / detach from graph
a.toString()             // debug string
a.backward()             // run backward pass
a.setRequiresGrad(true)  // enable gradient tracking
```

Modules

```typescript
import { Linear, Conv1d, Conv2d, ReLU, Sigmoid, Embedding } from '@mni-ml/framework';

const linear = new Linear(inputSize, outputSize);
const conv1d = new Conv1d(inChannels, outChannels, kernelSize, stride, padding);
const conv2d = new Conv2d(inChannels, outChannels, kernelSize, stride, padding);
const embedding = new Embedding(vocabSize, embeddingDim);

// Use in forward pass
const out = linear.forward(input);
```

Functional ops

```typescript
import { softmax, gelu, layerNorm, crossEntropyLoss, dropout, avgpool2d, maxpool2d, tile } from '@mni-ml/framework';

const sm = softmax(logits, dim);
const g = gelu(x);
const ln = layerNorm(x, gamma, beta, eps);
const loss = crossEntropyLoss(logits, targets);
const dropped = dropout(x, rate, training);
const pooled = avgpool2d(x, kernelH, kernelW);
const maxPooled = maxpool2d(x, kernelH, kernelW);
const tiled = tile(x, [2, 1]);
```

Optimizers

```typescript
import { Adam, SGD } from '@mni-ml/framework';

const optimizer = new Adam(parameters, lr, beta1, beta2, eps, weightDecay);
// or
const optimizer = new SGD(parameters, lr);

optimizer.step();     // update parameters
optimizer.zeroGrad(); // clear gradients
```
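The quick start above performs a single optimizer step. To show how the pieces fit together over many iterations, here is a minimal editorial sketch (ours, not from the README) that fits y = 2x with one `Linear` layer; the MSE loss is composed from the documented elementwise ops, since the API listing shows no dedicated MSE helper.

```typescript
import { Tensor, Linear, SGD } from '@mni-ml/framework';

// Toy regression data: targets follow y = 2x.
const x = Tensor.rand([64, 1]);
const y = x.mul(2.0);

const model = new Linear(1, 1);
const optimizer = new SGD(model.parameters(), 0.1);

for (let i = 0; i < 200; i++) {
  const pred = model.forward(x);
  const err = pred.sub(y);
  const loss = err.mul(err).mean(); // MSE from elementwise ops
  loss.backward();                  // accumulate grads via the autograd tape
  optimizer.step();                 // SGD parameter update
  optimizer.zeroGrad();             // clear grads for the next iteration
}

// After training, f(1) should be close to 2.
console.log(model.forward(Tensor.ones([1, 1])).toString());
```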
Architecture

```
TypeScript API (tensor.ts, nn.ts, optimizer.ts)
 │
 └─→ N-API Bridge (lib.rs)
      │
      ├─→ CPU Backend (Vec, pure Rust)
      ├─→ CUDA Backend (cudarc + .cu kernels)
      └─→ WebGPU Backend (wgpu + .wgsl shaders)
```

All three backends share the same autograd tape and tensor store. Feature flags are mutually exclusive at compile time:

- `cpu` -- default, no GPU required
- `cuda` -- NVIDIA GPU via CUDA
- `webgpu` -- any GPU via wgpu (Metal, Vulkan, DX12)

Building from source

Only needed if you are contributing or want a custom build. Requires Rust.

```bash
# CPU (default)
npm run build:native

# CUDA (requires CUDA toolkit)
npm run build:native:cuda

# WebGPU
npm run build:native:webgpu

# Build TypeScript
npm run build
```

License

MIT
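The title reports training a 12M-parameter transformer, but the README stops at individual building blocks. As a rough, hypothetical sketch of how an attention layer could be composed from the documented API (`Linear`, `matmul`, `permute`, `div`, `softmax`), assuming an unbatched `[seqLen, dModel]` input and omitting masking and the output projection:

```typescript
import { Tensor, Linear, softmax } from '@mni-ml/framework';

// Hypothetical single-head self-attention, built only from ops
// listed in the README's API reference.
class SelfAttention {
  private wq: Linear;
  private wk: Linear;
  private wv: Linear;
  private scale: number;

  constructor(dModel: number) {
    this.wq = new Linear(dModel, dModel);
    this.wk = new Linear(dModel, dModel);
    this.wv = new Linear(dModel, dModel);
    this.scale = Math.sqrt(dModel);
  }

  forward(x: Tensor): Tensor {
    const q = this.wq.forward(x); // [seq, dModel]
    const k = this.wk.forward(x);
    const v = this.wv.forward(x);
    // scores = Q·Kᵀ / sqrt(dModel), softmax over the key dimension
    const scores = q.matmul(k.permute(1, 0)).div(this.scale);
    const attn = softmax(scores, 1);
    return attn.matmul(v); // weighted sum of values
  }

  parameters() {
    return [
      ...this.wq.parameters(),
      ...this.wk.parameters(),
      ...this.wv.parameters(),
    ];
  }
}
```

A full transformer block would add the output projection, residual connections, `layerNorm`, and a `gelu` feed-forward network, all of which appear in the functional API above.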

This analysis was written by the Genesis Park editorial team with the help of AI. The original post is available via the source link.
