Large-File Chunking with Vue + TS

Kirito
2024-05-14

In large-file upload scenarios, the front end splits the file into chunks and uploads them separately. This avoids server-side request-size limits and reduces the impact of network failures, since a failed chunk can be retried without re-uploading the whole file.

Example page

Chunking
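Before looking at the implementation, the boundary arithmetic used below can be sketched as a small pure helper (the name `chunkRanges` is illustrative, not from the original code):

```typescript
// Compute the [start, end) byte range of every chunk for a file of a given size.
function chunkRanges(fileSize: number, chunkSize: number): Array<{ start: number; end: number }> {
  const count = Math.ceil(fileSize / chunkSize)
  const ranges: Array<{ start: number; end: number }> = []
  for (let i = 0; i < count; i++) {
    // The last chunk is clamped to the end of the file, so it may be shorter.
    ranges.push({ start: i * chunkSize, end: Math.min((i + 1) * chunkSize, fileSize) })
  }
  return ranges
}
```

For example, a 25-byte file with a 10-byte chunk size yields three chunks, the last one only 5 bytes long.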

// Type of a single chunk
interface iChunk {
  index: number
  hash: string
  chunk: Blob
}

const CHUNK_SIZE = 1024 * 1024 * 10 // 10 MB
const chunkList = ref<iChunk[]>([]) // holds the finished chunks

// Split the file into chunks, one at a time
const handleFileChunk = async (file: File) => {
  const result: iChunk[] = []
  const chunkCount = Math.ceil(file.size / CHUNK_SIZE)
  for (let i = 0; i < chunkCount; i++) {
    const chunk = await createFileChunk(file, i, CHUNK_SIZE)
    result.push(chunk)
  }
  chunkList.value = result
}

// Create one chunk: slice the file and compute the slice's MD5 with SparkMD5.
// Note: the reader result is an ArrayBuffer (we called readAsArrayBuffer),
// so we keep the sliced Blob for upload instead of casting the buffer to a string.
const createFileChunk = (file: File, index: number, size: number): Promise<iChunk> => {
  return new Promise((resolve, reject) => {
    const start = index * size
    const end = Math.min(start + size, file.size)
    const blob = file.slice(start, end)
    const reader = new FileReader()
    const spark = new SparkMD5.ArrayBuffer()
    reader.onload = (e) => {
      spark.append(e.target?.result as ArrayBuffer)
      resolve({ index, hash: spark.end(), chunk: blob })
    }
    reader.onerror = (error) => {
      reject(error)
    }
    reader.readAsArrayBuffer(blob)
  })
}
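Once the chunks exist, they still need to be sent to the server. The original post stops at chunking, but a minimal sketch of uploading with a bounded concurrency pool might look like this (the `upload` callback is hypothetical, e.g. a `fetch()` POST of one chunk):

```typescript
// Upload all chunks, running at most `limit` uploads at a time.
// `upload` is a hypothetical callback supplied by the caller.
async function uploadChunks<T>(
  chunks: T[],
  upload: (chunk: T, index: number) => Promise<void>,
  limit = 4
): Promise<void> {
  let next = 0
  const runner = async () => {
    while (next < chunks.length) {
      const i = next++ // claim the next index (safe: JS runs runners on one thread)
      await upload(chunks[i], i)
    }
  }
  // Start min(limit, chunks.length) runners; each pulls work until none is left.
  await Promise.all(Array.from({ length: Math.min(limit, chunks.length) }, runner))
}
```

A failed chunk can then be retried individually inside `upload` without restarting the whole file.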

Web Worker optimization

Computing the MD5 of each chunk is CPU-intensive and can make the page sluggish or unresponsive, so we offload the hashing to Web Workers.

In a Vite project, a Worker is imported and instantiated with a special `?worker` suffix, as shown below (see the Vite documentation for details):

import Worker from './worker.ts?worker'

/* index.vue */
import Worker from './worker.ts?worker'

const CHUNK_SIZE = 1024 * 1024 * 10 // 10 MB
const THREAD_COUNT = 4 // number of worker threads

// Split the file into chunks across several workers
const handleFileChunk = (file: File) => {
  return new Promise<iChunk[]>((resolve, reject) => {
    const result: iChunk[] = []
    const chunkCount = Math.ceil(file.size / CHUNK_SIZE)
    const workerChunkCount = Math.ceil(chunkCount / THREAD_COUNT)
    let finishCount = 0
    for (let i = 0; i < THREAD_COUNT; i++) {
      const worker = new Worker()
      const startIndex = i * workerChunkCount
      const endIndex = Math.min(startIndex + workerChunkCount, chunkCount)
      worker.postMessage({
        file,
        CHUNK_SIZE,
        startIndex,
        endIndex
      })
      // Success callback: place this worker's chunks at their global indices
      worker.onmessage = (e) => {
        for (let j = startIndex; j < endIndex; j++) {
          result[j] = e.data[j - startIndex]
        }
        worker.terminate()
        finishCount++
        if (finishCount === THREAD_COUNT) {
          chunkList.value = result
          resolve(result)
        }
      }
      // Error callback
      worker.onerror = (error) => {
        reject(error)
      }
    }
  })
}
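The per-worker index split above (`workerChunkCount`, `startIndex`, `endIndex`) can be checked in isolation with a pure helper (illustrative only, not part of the original code):

```typescript
// Split `chunkCount` chunk indices into at most `threads` contiguous [start, end) slices.
function partition(chunkCount: number, threads: number): Array<[number, number]> {
  const per = Math.ceil(chunkCount / threads)
  const parts: Array<[number, number]> = []
  for (let i = 0; i < threads; i++) {
    const start = i * per
    if (start >= chunkCount) break // remaining threads would get empty ranges
    parts.push([start, Math.min(start + per, chunkCount)])
  }
  return parts
}
```

For example, `partition(10, 4)` yields `[[0,3],[3,6],[6,9],[9,10]]`. Note that when `chunkCount` is smaller than `THREAD_COUNT`, the loop in `handleFileChunk` still spawns all `THREAD_COUNT` workers and simply gives the trailing ones empty ranges, which is why `finishCount` is compared against `THREAD_COUNT`.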
/* worker.ts */
import SparkMD5 from 'spark-md5'
import type { iChunk } from './interfaces'

// Receive a range of chunk indices, build and hash each chunk, and post them back
onmessage = async (e) => {
  const promises = []
  const { file, CHUNK_SIZE, startIndex, endIndex } = e.data
  for (let i = startIndex; i < endIndex; i++) {
    promises.push(createFileChunk(file, i, CHUNK_SIZE))
  }
  const chunks = await Promise.all(promises)
  postMessage(chunks)
}

// Create one chunk: slice the file and compute the slice's MD5 with SparkMD5
// (same fix as above: keep the sliced Blob rather than casting the ArrayBuffer to a string)
const createFileChunk = (file: File, index: number, size: number): Promise<iChunk> => {
  return new Promise((resolve, reject) => {
    const start = index * size
    const end = Math.min(start + size, file.size)
    const blob = file.slice(start, end)
    const reader = new FileReader()
    const spark = new SparkMD5.ArrayBuffer()
    reader.onload = (e) => {
      spark.append(e.target?.result as ArrayBuffer)
      resolve({ index, hash: spark.end(), chunk: blob })
    }
    reader.onerror = (error) => {
      reject(error)
    }
    reader.readAsArrayBuffer(blob)
  })
}
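With every chunk hashed, a common next step (not covered in the original post) is to derive a file-level fingerprint from the ordered per-chunk hashes, e.g. for instant-upload or resume checks against the server. A minimal sketch, using Node's `crypto` for illustration (in the browser, SparkMD5 could play the same role):

```typescript
import { createHash } from "node:crypto"

// Derive a stable file-level fingerprint from the ordered chunk hashes.
// Order matters: swapping two chunks must change the fingerprint.
function fileFingerprint(chunkHashes: string[]): string {
  return createHash("md5").update(chunkHashes.join("")).digest("hex")
}
```

The server can look this fingerprint up to skip chunks (or whole files) it has already received.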