The topics in this section describe how a client driver configures its device.
A USB device exposes its capabilities in the form of a series of interfaces called a USB configuration. Each interface consists of one or more alternate settings, and each alternate setting is made up of a set of endpoints. The device must provide at least one configuration, but it can provide multiple configurations that are mutually exclusive definitions of what the device can do. For more information about configuration descriptors, see USB Configuration Descriptors.
Device configuration refers to the tasks that the client driver performs to select a USB configuration and an alternate setting in each interface. Before sending I/O requests to the device, a client driver must read the device's configuration, parse the information, and select an appropriate configuration. The client driver must select at least one of the supported configurations to make the device work.
A WDM-based client driver can select any of the configurations in a USB device.
If your client driver is based on Kernel-Mode Driver Framework or User-Mode Driver Framework, you should use the respective framework interfaces for configuring a USB device. If you are using the USB templates that are provided with Microsoft Visual Studio Professional 2012, the template code selects the first configuration and the default alternate setting in each interface.
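As a rough illustration of what that template code does, here is a minimal KMDF sketch that creates the framework USB target device object and selects the first configuration with its default alternate setting inside EvtDevicePrepareHardware. The device-context type, its accessor name, and the omission of error handling beyond status returns are assumptions made for the example, not the template's exact code.

```cpp
// Minimal KMDF sketch: select the first USB configuration and its default
// alternate setting. Assumes DEVICE_CONTEXT was registered as the device's
// context type when the WDFDEVICE was created; error handling is trimmed.
#include <ntddk.h>
#include <wdf.h>
#include <wdfusb.h>

typedef struct _DEVICE_CONTEXT {
    WDFUSBDEVICE UsbDevice;   // framework USB target device object
} DEVICE_CONTEXT, *PDEVICE_CONTEXT;
WDF_DECLARE_CONTEXT_TYPE_WITH_NAME(DEVICE_CONTEXT, GetDeviceContext)

NTSTATUS
EvtDevicePrepareHardware(
    WDFDEVICE Device,
    WDFCMRESLIST ResourcesRaw,
    WDFCMRESLIST ResourcesTranslated
    )
{
    UNREFERENCED_PARAMETER(ResourcesRaw);
    UNREFERENCED_PARAMETER(ResourcesTranslated);

    PDEVICE_CONTEXT ctx = GetDeviceContext(Device);
    NTSTATUS status;

    // Create the USB target device object once; the framework zero-initializes
    // the context, so a NULL handle means it has not been created yet.
    if (ctx->UsbDevice == NULL) {
        status = WdfUsbTargetDeviceCreate(Device,
                                          WDF_NO_OBJECT_ATTRIBUTES,
                                          &ctx->UsbDevice);
        if (!NT_SUCCESS(status)) {
            return status;
        }
    }

    // Select the first configuration. For a single-interface device this also
    // activates alternate setting 0 of that interface.
    WDF_USB_DEVICE_SELECT_CONFIG_PARAMS configParams;
    WDF_USB_DEVICE_SELECT_CONFIG_PARAMS_INIT_SINGLE_INTERFACE(&configParams);

    status = WdfUsbTargetDeviceSelectConfig(ctx->UsbDevice,
                                            WDF_NO_OBJECT_ATTRIBUTES,
                                            &configParams);
    return status;
}
```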
Topic | Description |
---|---|
 | In this topic, you will learn how to select a configuration in a universal serial bus (USB) device. |
 | This topic describes the steps for issuing a select-interface request to activate an alternate setting in a USB interface (see the sketch after this table). The client driver must issue this request after selecting a USB configuration. Selecting a configuration, by default, also activates the first alternate setting in each interface in that configuration. |
 | This topic provides information about registry settings that configure the way Usbccgp.sys selects a USB configuration. The topic also describes how Usbccgp.sys handles select-configuration requests sent by a client driver that controls one of the functions of a composite device. |
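For the select-interface request mentioned in the table, a minimal KMDF sketch might look like the following. It assumes the WDFUSBDEVICE handle is already available after a successful select-configuration request, and that interface 0 actually exposes an alternate setting 1; both are assumptions made for illustration only.

```cpp
// Minimal KMDF sketch: activate alternate setting 1 of interface 0 by
// setting index, after the configuration has already been selected.
#include <ntddk.h>
#include <wdf.h>
#include <wdfusb.h>

NTSTATUS
SelectAlternateSetting1(
    WDFUSBDEVICE UsbDevice   // obtained earlier via WdfUsbTargetDeviceCreate
    )
{
    // Retrieve the framework interface object for interface index 0.
    WDFUSBINTERFACE usbInterface = WdfUsbTargetDeviceGetInterface(UsbDevice, 0);
    if (usbInterface == NULL) {
        return STATUS_INVALID_PARAMETER;
    }

    // Ask the framework to select alternate setting 1 of that interface.
    WDF_USB_INTERFACE_SELECT_SETTING_PARAMS settingParams;
    WDF_USB_INTERFACE_SELECT_SETTING_PARAMS_INIT_SETTING(&settingParams, 1);

    return WdfUsbInterfaceSelectSetting(usbInterface,
                                        WDF_NO_OBJECT_ATTRIBUTES,
                                        &settingParams);
}
```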
For information about special considerations related to the configuration of devices that require firmware downloads, see Configuring USB Devices that Require Firmware Downloads.
Certain restrictions apply depending on whether the client driver is using WDF objects and whether the device has a single interface or multiple interfaces. Consider these restrictions before changing the default configuration.
USB Driver Development Guide
USB Configuration Descriptors
Working with USB Devices
Working with USB Interfaces in UMDF
Published under an open-source LGPL license on Github to allow others to contribute features and forks.
Supports a wide range of camera devices, with the ability to stitch and switch between sensors on the fly.
Built in C++ for real-time performance and stability, using common frameworks including OpenCV and openFrameworks.
Community Core Vision (CCV for short) is an open-source, cross-platform solution for blob tracking with computer vision. It takes a video input stream and outputs tracking data (e.g. coordinates and blob size) and events (e.g. finger down, moved, and released) that are used in building multi-touch applications. CCV can interface with various web cameras and video devices, connect to various TUIO/OSC/XML enabled applications, and supports many multi-touch lighting techniques, including FTIR, DI, DSI, and LLP, with expansion planned for future vision applications (custom modules/filters). This project is developed and maintained by the NUI Group Community.
CCV is released under the commercially friendly LGPL license.
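The processing chain described above (capture a frame, subtract a background, filter, threshold, then detect and report blobs) can be sketched with OpenCV, which CCV builds on. This is only an illustrative example, not CCV's actual pipeline; the camera index, threshold, and blob-size limits are arbitrary values chosen for the sketch.

```cpp
// Illustrative blob-tracking sketch (not CCV's code): static background
// subtraction, smoothing, thresholding, and contour-based blob detection.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::VideoCapture cam(0);                  // first attached camera
    if (!cam.isOpened()) return 1;

    cv::Mat frame, gray, background, diff, binary;
    cam >> frame;
    cv::cvtColor(frame, background, cv::COLOR_BGR2GRAY);   // like "Remove Background"

    while (cam.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::absdiff(gray, background, diff);                // static background subtraction
        cv::GaussianBlur(diff, diff, cv::Size(5, 5), 0);    // like the "Smooth" filter
        cv::threshold(diff, binary, 30, 255, cv::THRESH_BINARY); // like the "Threshold" slider

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        for (const auto& c : contours) {
            double area = cv::contourArea(c);
            if (area < 50.0 || area > 5000.0) continue;     // like Min/Max Blob Size
            cv::Moments m = cv::moments(c);
            std::cout << "blob at " << m.m10 / m.m00 << ", " << m.m01 / m.m00
                      << " size " << area << "\n";
        }
        if (cv::waitKey(1) == 27) break;                    // Esc to quit
    }
    return 0;
}
```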
Established in 2006, the Natural User Interface Group is an open source community that creates and shares interaction techniques and standards that benefit designers and developers throughout the world. We offer a collaborative environment for scientists who are interested in learning and developing modern human/computer interaction methods and concepts. Our research includes topics such as computer vision, touch computing, voice and gesture recognition, experience design, and information visualization. Our mission is to openly discover, document, and distribute NUI knowledge.
1. Source Image - Displays the raw video image from either the camera or a video file.
2. Use Camera Toggle - Sets the input source to the camera and grabs frames from the selected camera.
3. Use Video Toggle - Sets the input source to video and grabs frames from a video file.
4. Previous Camera Button - Selects the previous camera device attached to the computer, if more than one is attached.
5. Next Camera Button - Selects the next camera device attached to the computer, if more than one is attached.
6. Tracked Image - Displays the final image, after image filtering, that is used for blob detection and tracking.
7. Inverse - Tracks black blobs instead of white blobs.
8. Threshold Slider - Adjusts the level of acceptable tracked pixels. The higher the value, the bigger the blobs have to be to be tracked.
9. Movement Filtering - Adjusts the acceptable distance (in pixels) before a blob movement is detected. The higher the value, the farther you have to move your finger for CCV to register a blob movement.
10. Min Blob Size - Adjusts the minimum acceptable blob size. The higher the value, the bigger a blob has to be to be assigned an ID.
11. Max Blob Size - Adjusts the maximum acceptable blob size. The higher the value, the bigger a blob can be before it loses its ID.
12. Remove Background Button - Captures the current source image frame and uses it as the static background image that is subtracted from the current active frame. Press this button again to recapture the static background image.
13. Dynamic Subtract Toggle - Dynamically adjusts the background image. Turn this on if the environmental lighting changes often or false blobs keep appearing due to environmental changes. The slider determines how fast the background is learned.
14. Smooth Slider - Smooths the image and filters out noise (random specks).
15. Highpass Blur Slider - Removes the blurry parts of the image and leaves the sharper, brighter parts.
16. Highpass Noise - Filters out the noise (random specks) remaining after applying Highpass Blur.
17. Amplify Slider - Brightens weak pixels. If blobs are weak, this can be used to make them stronger.
18. On/Off Toggle - Appears on each filter; turns that filter on or off.
19. Camera Settings Button - Opens the camera settings. This opens the camera's driver controls, which is especially useful when using a PS3 Eye camera.
20. Flip Vertical Toggle - Flips the source image vertically.
21. Flip Horizontal Toggle - Flips the source image horizontally.
22. GPU Mode Toggle - Turns on experimental hardware acceleration and uses the GPU. This is best used on newer graphics cards only.
23. Send UDP Toggle - Turns on the sending of TUIO messages over UDP (see the sketch after this list).
24. Flash XML - Turns on the sending of Flash XML messages (no need for flosc anymore).
25. Binary TCP - Turns on the sending of raw binary messages (x,y coordinates).
26. Enter Calibration - Loads the calibration screen.
27. Save Settings - Saves all the current settings into the XML settings file.
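As a minimal sketch of the receiving end of the Send UDP toggle, the following program binds to TUIO's default UDP port, 3333, and prints the size of each packet it receives. It deliberately uses plain POSIX sockets rather than a TUIO client library and does not decode the OSC bundles; a real client would parse them, for example with a TUIO/OSC library. The buffer size is an arbitrary choice for the sketch.

```cpp
// Minimal TUIO/UDP listener sketch: receive packets from CCV on port 3333.
// POSIX sockets; on Windows the same calls work with Winsock after WSAStartup.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) return 1;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(3333);                 // TUIO default port
    if (bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) return 1;

    char buf[2048];
    for (;;) {
        ssize_t n = recv(sock, buf, sizeof(buf), 0);
        if (n <= 0) break;
        // A real client would decode the OSC bundle in buf[0..n) here.
        std::printf("received %zd-byte TUIO/OSC packet\n", n);
    }
    close(sock);
    return 0;
}
```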
In order to calibrate CCV for your camera and projector, you'll need to run the calibration process. Calibrating allows touch points to line up with elements on screen, so that when you touch something displayed on screen, the touch is registered in the correct place. To do this, CCV has to translate camera space into screen space; this is done by touching individual calibration points. Following the directions below will help explain how to set up and perform calibration.
Read the full calibration guide here.
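The camera-space-to-screen-space translation can be pictured with a small example. The sketch below is not CCV's actual calibration code: it assumes each calibration cell behaves like an axis-aligned rectangle in camera space and bilinearly interpolates a touch position into the matching screen-space cell. Real setups also have to cope with lens distortion and skew, which is why several calibration points are touched instead of just the screen corners.

```cpp
// Illustrative sketch: map a camera-space touch into screen space by bilinear
// interpolation within one calibration cell. All coordinates are hypothetical.
#include <iostream>

struct Point { float x, y; };

Point cameraToScreen(Point camTouch,
                     Point camTopLeft, Point camBottomRight,                 // cell bounds in camera space
                     Point scrTL, Point scrTR, Point scrBL, Point scrBR)     // cell corners in screen space
{
    // Normalized position of the touch inside the camera-space cell.
    float u = (camTouch.x - camTopLeft.x) / (camBottomRight.x - camTopLeft.x);
    float v = (camTouch.y - camTopLeft.y) / (camBottomRight.y - camTopLeft.y);

    // Interpolate along the top and bottom edges, then between them.
    Point top    { scrTL.x + u * (scrTR.x - scrTL.x), scrTL.y + u * (scrTR.y - scrTL.y) };
    Point bottom { scrBL.x + u * (scrBR.x - scrBL.x), scrBL.y + u * (scrBR.y - scrBL.y) };
    return { top.x + v * (bottom.x - top.x), top.y + v * (bottom.y - top.y) };
}

int main()
{
    // Hypothetical numbers: a 640x480 camera cell mapped onto a 1280x800 screen cell.
    Point touch { 320.0f, 240.0f };
    Point scr = cameraToScreen(touch,
                               {0, 0}, {640, 480},
                               {0, 0}, {1280, 0}, {0, 800}, {1280, 800});
    std::cout << "screen: " << scr.x << ", " << scr.y << "\n";   // prints 640, 400
    return 0;
}
```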
Platform | Title | File |
---|---|---|
Linux Ubuntu 11.04+ (32/64) | Package for Ubuntu | Download Now |
Mac OS X 10.6+ | Package for Mac OS X. Running the demo on Mac OS X: 1. Download the latest binary distribution for OS X (highlighted in green). 2. If you're using Safari, the archive should be extracted for you; if you're using another browser, double-click the downloaded archive to extract it. 3. In the extracted folder, double-click 'demo.command' or the 'demo' file to get started. If you're unsure what to try or what's possible in this simple demo, watch the video on the front page for an example. | Download Now |
Windows 7/8 (32/64 bits) | Package for Windows. Running the demo on Windows: 1. Download the latest binary distribution for Windows (highlighted in green). 2. Extract the archive by right-clicking it and selecting 'Extract All...'. 3. The extracted folder should open automatically; double-click 'demo.bat' to get started. If you're unsure what to try or what's possible in this simple demo, watch the video on the front page for an example. | Download Now |
C++ Cross Platform Source (VS2008/GCC) | Latest stable source on Github | Download Now |
Platform | Title | File |
---|---|---|
Windows 7/8 (32/64 bits) | Installer with Multi-camera/Fiducial Tracking | Download Now |
C++ Windows Source (VS2008) | Latest stable source on Github | Download Now |
Platform | Title | File |
---|---|---|
C++ Windows Source (VS2010) | Latest developer preview on Github | Download Now |
File | Date | Size | D/L | MD5 | |
---|---|---|---|---|---|
Version 1.5.0 (Windows - Multicam Official) | |||||
CCV-1.5.exe | 10/29/2011 08:16 pm | 15.2 MB | 31417 | 6fcc86fbbcedf362433ca3a8f0ea35a8 | |
Version 1.4.1 (Windows - Multicam) | |||||
CCV-1.4.1a-win-bin-preview.zip | 06/03/2011 11:02 pm | 11.6 MB | 49626 | bc8fdddaf0b157072d59cc7ab7bc617f | |
Version 1.4 (Windows - Preview) | |||||
CCV-1.4.0-win-bin-preview.zip | 09/11/2010 09:36 pm | 10.3 MB | 26241 | 6c6a16b362d8827b55c69c0fa562863f | |
Version 1.3 (Windows - Stable) | |||||
CCV-1.3-win-bin.zip | 02/10/2010 06:59 pm | 9.8 MB | 365365 | e61a5102779990e946379ae8c427dc28 | |
CCV-1.3-win-installer.exe | 02/10/2010 06:47 pm | 7 MB | 42615 | b1b844d50b584e76cd7ea09875303014 | |
CCV-1.3-win-src-r195.zip | 10/19/2009 05:14 am | 58.6 MB | 65188 | d9de71a3cb07d69c68a129a0a210021d | |
Version 1.2 (Cross Platform - Stable) | |||||
CCV-1.2-lin-32-bin.tar.gz | 05/05/2009 11:53 pm | 7.8 MB | 46146 | 97e4d550522ec44139258ce1269691db | |
CCV-1.2-lin-64-bin.tar.gz | 05/05/2009 11:51 pm | 6.9 MB | 4313 | 42680ea28a46ae6bdcc03ea1093437c1 | |
CCV-1.2-mac-bin.zip | 05/06/2009 12:21 am | 12.7 MB | 54685 | 0718d80542fed0664ed72be72cc88672 | |
CCV-1.2-win-bin.zip | 05/06/2009 6:33 pm | 12.1 MB | 114077 | c784d6f5bb051444a284f1ffa2ae4521 | |
Version 1.1 (The Beta Release) | |||||
tbeta-1.1-lin-bin.tar.gz | 04/28/2009 01:00 pm | 25.5 MB | 3139 | a7ec7aa5ab6188644ff487279cfec045 | |
tbeta-1.1-mac-bin.zip | 04/28/2009 12:44 pm | 17.1 MB | 3034 | bd7e8444b575a941622805f75455d8c2 | |
tbeta-1.1-win-bin.zip | 04/28/2009 12:30 pm | 6.9 MB | 18595 | 72dd483dc655d0ac9bcaa1322ab27d43 | |
tbeta-1.1.1-win-ps3.zip | 05/06/2009 05:15 am | 4.5 MB | 2954 | 85a4e565d83addd33704e55249ef85e3 | |
Total Downloads | 1100000+ |
CCV Physics - Physics-based tabletop interaction.
CCV Multicamera Support - Showcasing the new drag & drop multicamera support in CCV 1.5.
CCF Alpha - Demonstrating the 'put that there' concept with the new fusion engine.
CCV Finger Tracking - Finger tracking using Kinect Core Vision and peak detection.
CCV Learning Shapes - Community Core Vision 1.5 shape learning and detection.
Kinect Core Vision - Kinect + CCV = awesome demos of Kinect being used for CV.
CCV Fog Tracking - A project that offers the ability to see and interact with static and animated 2D and 3D images.
Full Emulation of the Microsoft Surface 2.0 Hardware on DSI Table - This video shows interaction between CCV 1.5 and the Microsoft Surface 2.0 SDK.
CCV 1.5 Release Video - Release video of the latest version of Community Core Vision 1.5 - a GSoC 2011 project.
CCV Kinect - 'Kinect Core Vision' showcasing depth tracking and interaction.
CCV 1.5 Features - The new multi-camera stitching ability is fully integrated, making it easy to take two or more cameras and stitch them into a single image.
CCV Robot Tracking - Top-down tracking of robots using CCV and a C++ fluid client.
CCV MT4J Client - A multi-camera version of Community Core Vision (CCV) 1.4 in Java.
CCV Mouseless - A great video from the MIT Media Lab using CV to replace the mouse.
CCV Multiplexer Suite - A multiplexer to bundle the data from 8 different CCV instances.
CCV Fiberboard - Using fiber optics in place of typical optics to reduce the size needed.
CCV AS3 2D Client - Example of sorting and grouping notes on an LLP screen in Flash Player 9.
CCV PyMT Client - An LLP setup using 8 infrared lasers and 2 PS3 Eye cameras, in Python.
CCV Sphere - A low-cost spherical multitouch display with Google Maps, photo, and video demos.
CCV Hand Tracking - Basic hand tracking with CCV 1.3.
CCV Camshift Tracking - Tracking of hand orientation using the hand tracking module.
CCV Whiteboard - Cheap DIY whiteboard demo reel; supports annotating and storing the result to an image file.
CCV Pseudo Hands - Adapting the UI depending on the user's hand position.
CCV Depth Mapping - Using FTIR and computer vision to determine the depth of a touch.
CCV MT Storm (Subcycle) - Community member subcycle creates an audio/visual remix.
CCV Cheap MT Pad - How to Make a Cheap Multitouch Pad - MTmini, over 2 million views!
Argos Interface Builder - An application built in oF using a custom widget library to provide a drag-and-drop approach to building interfaces.
ECHI Head Tracking - Using 6-degrees-of-freedom head tracking (GSoC 2008).
Grafiti Gestures - A gesture recognition framework for interactive tabletop interfaces.
X.org TUIO - Demonstration of the X.org TUIO input driver.
Share your project or view more videos on YouTube.