PrismJS


Canvas and video analysis for the web.

See it in action:

https://prismjs.com/demo.html

Our API documentation is available at:

https://prismjs.com/api/index.html

PrismJS is a JavaScript library for reading, analyzing, and writing video and canvas content. With PrismJS you can build web applications that analyze video and canvas data.

It is completely open source and hosted on GitHub.

If you need help or have any questions, please contact the PrismJS team at support@prismjs.com.

PrismJS is a collection of tools for analyzing web-based content such as Canvas, Video, and WebGL elements. The library provides a set of classes for the different content types, which are used to parse, analyze, and extract data from the page.

The library can be used either in a browser environment or in Node.js.

PrismJS is a Web application for complex computer vision and multimedia analysis. It supports a variety of input media including video, camera, microphone, and canvas. PrismJS provides a modular framework to construct arbitrary analysis pipelines consisting of multiple passes implemented in JavaScript. The results of each pass are made available to subsequent passes and to the user interface.
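The multi-pass pipeline described above can be sketched in plain JavaScript. Note that the names below (`runPipeline`, the pass objects) are illustrative assumptions, not a documented PrismJS API; the point is only the pattern in which each pass sees the frame plus the results of every earlier pass.

```javascript
// Illustrative sketch of a multi-pass analysis pipeline.
// NOT the PrismJS API — names here are hypothetical.
// Each pass receives the raw frame plus the accumulated
// results of all earlier passes.
function runPipeline(frame, passes) {
  const results = {};
  for (const pass of passes) {
    // A pass may read what earlier passes produced via `results`.
    results[pass.name] = pass.run(frame, results);
  }
  return results;
}

// Example passes operating on a frame of grayscale pixel values.
const meanBrightness = {
  name: "meanBrightness",
  run: (frame) => frame.reduce((a, b) => a + b, 0) / frame.length,
};

const isDark = {
  name: "isDark",
  // Consumes the result of the earlier pass rather than the raw frame.
  run: (frame, results) => results.meanBrightness < 64,
};
```

A UI layer would then read the final `results` object, exactly as the text describes results being made available to subsequent passes and to the interface.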

We have used PrismJS to build applications for motion capture, object recognition, facial expression recognition, and interactive art. PrismJS runs entirely in the client’s browser and requires no server-side infrastructure. It uses WebGL for GPU-accelerated computation and Canvas2D for visualizing results and interacting with users.

Prism is a new open source project from the MIT Media Lab. It enables you to perform low-level video and audio analysis directly in the browser. We are excited to announce this technology preview to explore how developers can utilize machine learning and computer vision algorithms on the web.

Prism currently supports two main features:

– **Video Segmentation**: Prism can analyze a video and automatically divide it into segments based on visual content. By default, these segments are shot boundaries, but you can also configure Prism to detect faces, text, etc. You can learn more about this feature [in this paper](https://dspace.mit.edu/handle/1721.1/95514).

– **Canvas Tracking**: Prism enables you to track canvas elements across multiple frames of a video (i.e., detect and track objects in a video). By default, this feature uses color histogram similarity, but you can also pass your own custom function to compare two canvases.
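Both features above rest on comparing pixel content between frames. As a rough sketch of the color-histogram similarity the tracking feature defaults to (these functions are my own illustration, not PrismJS code), one can bucket the RGBA data returned by `CanvasRenderingContext2D.getImageData()` into a per-channel histogram and measure overlap:

```javascript
// Illustrative sketch (not the PrismJS API): color-histogram
// similarity on raw RGBA pixel data, as returned by
// CanvasRenderingContext2D.getImageData().data.

// Build a normalized histogram with `bins` buckets per RGB channel.
function colorHistogram(rgba, bins = 8) {
  const hist = new Float64Array(bins * 3);
  const step = 256 / bins;
  for (let i = 0; i < rgba.length; i += 4) {
    hist[Math.floor(rgba[i] / step)] += 1;                // R
    hist[bins + Math.floor(rgba[i + 1] / step)] += 1;     // G
    hist[2 * bins + Math.floor(rgba[i + 2] / step)] += 1; // B
  }
  const pixelCount = rgba.length / 4;
  for (let j = 0; j < hist.length; j++) hist[j] /= pixelCount;
  return hist;
}

// Histogram intersection: 1 for identical distributions, 0 for disjoint.
// Each of the three channels sums to 1, hence the division by 3.
function histogramSimilarity(a, b) {
  let s = 0;
  for (let j = 0; j < a.length; j++) s += Math.min(a[j], b[j]);
  return s / 3;
}
```

For shot-boundary detection, the same comparison applies between consecutive frames: a similarity that drops below some threshold marks a cut. For tracking, the text notes you can swap in your own comparison function in place of the histogram default.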

Prism is a library for making sense of live video and audio on the web. It takes in raw pixel data from canvases or videos, analyzes it, and then makes that data available to other libraries or applications.

The primary use case is real-time analysis of video and audio streams from the user’s webcam. For example, Prism can tell you if there is a face in the view, when someone moves their arm, or if there was any motion at all over the last few seconds.
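The "any motion at all" case can be illustrated with the simplest possible approach, frame differencing: compare each pixel of the current frame against the previous one and count how many changed meaningfully. The function below is a hypothetical sketch of that technique, not a documented PrismJS call.

```javascript
// Hypothetical sketch (not the PrismJS API): motion detection by
// frame differencing over two RGBA pixel buffers of equal size.
function motionFraction(prev, curr, threshold = 25) {
  let changed = 0;
  for (let i = 0; i < curr.length; i += 4) {
    // Compare the average of the RGB channels to keep it cheap.
    const a = (prev[i] + prev[i + 1] + prev[i + 2]) / 3;
    const b = (curr[i] + curr[i + 1] + curr[i + 2]) / 3;
    if (Math.abs(a - b) > threshold) changed++;
  }
  return changed / (curr.length / 4); // fraction of pixels that changed
}

// Report motion when more than 1% of pixels changed between frames.
const hasMotion = (prev, curr) => motionFraction(prev, curr) > 0.01;
```

In a browser, `prev` and `curr` would come from drawing successive webcam frames into a canvas and reading back `getImageData().data`; detecting motion "over the last few seconds" then amounts to keeping a short rolling buffer of these per-frame fractions.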

