OpenCV中文网站

Human pose recognition — source code provided

Posted 2015-7-20 16:51:11
Computer vision application

        



Report: Computer vision application

1. The object of the project
The object of the project is human gesture recognition and indoor localization of people.
2. The method and the principle applied to the project

2.1 Platform
The platform is based on Visual Studio 2012 and OpenCV 2.4.10.
2.2 The principle of transforming an RGB image to a gray image
There are three major methods for transforming an RGB image to a gray image.
The first, the maximum-value method, sets R, G, and B to the maximum of the three:
Gray = R = G = B = max(R, G, B)
The second, the mean-value method, sets R, G, and B to the mean of the three:
Gray = R = G = B = (R + G + B)/3
The third, the weighted-average method, gives different weights to R, G, and B according to importance or other indicators and adds the three parts together. In fact, the human eye is most sensitive to green, then red, and least sensitive to blue:
Gray = 0.30R + 0.59G + 0.11B
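For concreteness, here is a minimal sketch (not from the original post) of the three conversions above, written per pixel against the OpenCV 2.4 API used throughout this report:

#include <opencv2/core/core.hpp>
#include <algorithm>

// method: 0 = maximum value, 1 = mean value, otherwise = weighted average
cv::Mat toGray(const cv::Mat &bgr, int method)
{
    cv::Mat gray(bgr.rows, bgr.cols, CV_8UC1);
    for (int y = 0; y < bgr.rows; ++y)
        for (int x = 0; x < bgr.cols; ++x)
        {
            cv::Vec3b p = bgr.at<cv::Vec3b>(y, x); // OpenCV stores channels as B, G, R
            uchar B = p[0], G = p[1], R = p[2];
            uchar g;
            if (method == 0)      g = std::max(R, std::max(G, B));       // maximum value
            else if (method == 1) g = (uchar)((R + G + B) / 3);          // mean value
            else                  g = (uchar)(0.30*R + 0.59*G + 0.11*B); // weighted average
            gray.at<uchar>(y, x) = g;
        }
    return gray;
}

In practice cv::cvtColor(src, dst, CV_BGR2GRAY) implements the weighted-average variant.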
2.3 The principle of image enhancement
Image enhancement is the process of making images more useful. There are two broad categories of image-enhancement techniques. The first is the spatial-domain technique, a direct manipulation of image pixels that includes point processing and neighborhood operations. The second is the frequency-domain technique, a manipulation of the Fourier transform or wavelet transform of an image.
The principle of the median filter is to replace the value of a pixel by the median of the gray levels in the neighborhood of that pixel (the original value of the pixel is included in the computation of the median). It forces points with distinct gray levels to become more like their neighbors.
In addition, we apply morphological image processing after smoothing. Morphological image processing (or morphology) describes a range of image-processing techniques that deal with the shape (or morphology) of features in an image. The basic idea of morphology is to use a special structuring element to measure or extract the corresponding shapes or characteristics in the input images for further image analysis and object recognition. The mathematical foundation of morphology is set theory. There are two basic morphological operations: erosion and dilation.
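A minimal sketch of this enhancement chain, assuming a 5x5 median window and a 3x3 rectangular structuring element (the report does not state the exact sizes used):

#include <opencv2/imgproc/imgproc.hpp>

void enhance(const cv::Mat &src, cv::Mat &dst)
{
    // median filter: replace each pixel by the median of its neighborhood
    cv::medianBlur(src, dst, 5);
    cv::Mat se = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::erode(dst, dst, se);  // erosion removes small bright noise
    cv::dilate(dst, dst, se); // dilation restores the surviving shapes
}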
2.4 The principle of thresholding
Thresholding is particularly useful for segmentation, in which we want to isolate an object of interest from the background; thresholding is usually the first step in any segmentation approach. The formula below is the basic principle of image segmentation: when the gray level is not greater than the threshold, we set the pixel value to 0 (black); when the gray level is greater than the threshold, we set the pixel value to 255 (white).
g(x, y) = 0    if f(x, y) <= T
g(x, y) = 255  if f(x, y) > T
The threshold value T itself is obtained from the image histogram.
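One common histogram-based choice is Otsu's method, which OpenCV can apply automatically; the report does not name the exact histogram rule it used, so the following is only one plausible reading:

#include <opencv2/imgproc/imgproc.hpp>

cv::Mat binarize(const cv::Mat &gray)
{
    cv::Mat binary;
    // The threshold argument (0) is ignored: THRESH_OTSU picks T from the histogram.
    cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    return binary; // pixels <= T become 0 (black), pixels > T become 255 (white)
}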
2.5 The principle of the classifier
The classifier is an algorithm or device that separates objects into different classes. Usually, a classification system consists of three parts. The first is the sensor, for instance an imaging device or a fingerprint reader. The second is the feature extractor, for example an edge detector or a property descriptor. The third is the classifier itself, which uses the extracted features for decision making, for example by Euclidean distance or other methods.
Features can be regarded as the descriptors we introduced before, and a feature should be representative and useful for classification.
As for the feature space: the set of measurements for a pattern forms its feature vector, and each feature vector is a point in the so-called feature space. Similar objects yield similar measurement results, so nearby points in feature space correspond to similar objects, and distance in feature space is related to similarity. Points that belong to the same class form a cloud in feature space.
We divide the data set into a training set and a test set. The performance of a classifier should be assessed by the classification error on an independent test set, which must not contain objects that are included in the training set. A decision boundary is determined by minimizing the classification error on the training set; the classifier's performance is then determined by computing the classification error on the test set.
3. The content and the result of the project

3.1 The main steps in the project
Before we segment and classify the targets, note that the images provided are in color, so we first convert them to gray images. Because we use the method of SVM (Support Vector Machines), we divide the images into a training set and a test set. Next comes image enhancement, then thresholding, then object extraction, and then feature extraction and classification. We first train on the training set with representative features and then run the test set to recognize the human's gesture. The depth information of the human is obtained by binocular stereo vision, which then gives the position of the people and prepares for three-dimensional reconstruction.
3.2 About human body posture recognition

Three kinds of methods are most common:
1. Methods based on template matching.
2. Methods based on classification.
3. Prediction-based approaches.
The method based on template matching may be the most accurate of the three, but it consumes a lot of time, so it is not real-time. The method based on classification meets the accuracy requirements when dealing with small amounts of data and is simple to implement, so in a single scene it is the method used here for the time being. As for the third method: when a computer processes data from a complex scene, the data expands at a geometric rate, and dealing with this is one of the hardest problems in artificial intelligence. In recent years, however, neural networks based on deep learning have shown their advantages in speech recognition and image processing.
3.2.1 Foreground extraction
Moving-target detection is the basis of the whole target detection and tracking system and the foundation for further processing (such as encoding, target tracking, target classification, and target behavior understanding). Its purpose is to extract the moving object (such as a human or a vehicle) from the video image.
The three commonly used methods are the frame-difference method, the background-subtraction method, and the optical-flow method, and many improvements on them exist. One combines the inter-frame difference method with the background-difference method and achieves good results, but incomplete object contours and missed target points remain. Background subtraction works better than direct differencing; in its simplest statistical-average form, the background is obtained by averaging a continuous image sequence.
To obtain a better background, R. T. Collins proposed establishing a single-Gaussian background model, and Grimson et al. proposed an adaptive mixture-of-Gaussians background model to obtain a more accurate background description for target detection. At the same time, to increase robustness and reduce the impact of environmental changes, it is important to keep updating the background, for example by recursive updating with the statistical-averaging method, where a simple adaptive filter updates the background model.
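For the recursive statistical-average update mentioned above, the background model B is typically refreshed with a simple running-average filter, where α is a small learning rate and F_t is the current frame:

B_t = (1 − α) · B_{t−1} + α · F_t

This is exactly what cv::accumulateWeighted computes in the BGFGSegmentor class listed in the source code at the end of this post.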
In this paper, following the algorithms proposed by KaewTraKulPong et al. [1] and Zivkovic et al. [2][3], we update the three parameters (weight, mean, variance) of each Gaussian in the mixture model, and the algorithm is implemented with basic OpenCV functions. The main steps are as follows:
1. Firstly, the mean, variance, and weight of each Gaussian are set to 0.
2. The first T frames of the video are used to train the GMM. For each pixel, up to GMM_MAX_COMPONT Gaussian components are maintained. When the first pixel value arrives, a new Gaussian is created for it, with its mean initialized to that value and its weight set to 1.
3. During training, each new pixel value is compared with the means of the existing Gaussians; if the pixel value lies within three standard deviations of a Gaussian's mean, it is considered to belong to that Gaussian, which is then updated with the following equations.
w = (1 − α)·w + α
μ = (1 − ρ)·μ + ρ·x
σ² = (1 − ρ)·σ² + ρ·(x − μ)²,   where ρ = α/w and α is the learning rate
4. When the number of training frames reaches T, the number of Gaussians kept per pixel is chosen adaptively. First the Gaussians are sorted by weight/variance in descending order, and then the first B of them are selected such that
w_1 + w_2 + … + w_B > 1 − cf
where cf is generally set to 0.3.
In this way, noise points can be eliminated during the training process.
5. During the testing phase, each new pixel value is compared with the mean of each of the B Gaussians; if it lies within two standard deviations of one of them, it is classified as background, otherwise as foreground (a single matching Gaussian component is enough to mark the pixel as background). Foreground pixels are assigned 255 and background pixels 0, forming a binary map.
6. Since the foreground binary map contains a lot of noise, a morphological opening is applied to remove the small noise points, followed by a closing reconstruction to recover the edge information lost by the opening.
The above is the general flow of the algorithm, but many details still need attention in the actual implementation, such as the choice of parameter values. After testing, the commonly used parameter values are declared as follows:
Among the three updated parameters (weight, mean, variance), the learning rate is 0.005; that is to say, T equals 200.
The maximum number of mixture Gaussians per pixel is defined as 7.
The first 200 frames of the video are used for training.
cf is taken as 0.3; that is, B is the adaptive number of Gaussians whose cumulative weight exceeds 0.7.
During training, when a new Gaussian must be created, its weight is set equal to the learning rate, i.e., 0.005, its mean is set to the input pixel value, and its variance is set to 15.
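These parameter choices map naturally onto the constructor of OpenCV 2.4's built-in cv::BackgroundSubtractorMOG (the class used in the full program at the end of this post); a minimal sketch:

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/background_segm.hpp>

int demoMOG()
{
    cv::VideoCapture cap("bend.avi");
    // history = 200 frames, up to 7 Gaussians per pixel,
    // background ratio 0.7 (i.e. cf = 0.3)
    cv::BackgroundSubtractorMOG mog(200, 7, 0.7);
    cv::Mat frame, fg;
    while (cap.read(frame))
    {
        mog(frame, fg, 0.005); // learning rate alpha = 0.005, so T = 1/alpha = 200
        cv::imshow("foreground", fg);
        if (cv::waitKey(10) >= 0) break;
    }
    return 0;
}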
The following picture shows the dynamic background during the training process.
Figure 3.1: The result of foreground extraction
3.2.2 Feature extraction
After an image has been segmented into regions, representation and description should be considered; they are what make the data useful to a computer. A region can be represented in two ways:
1. In terms of its external characteristics (its boundary), focusing on shape characteristics.
2. In terms of its internal characteristics (its region), focusing on regional properties, e.g., color or texture. Sometimes we may need to use both.
Choosing a representation scheme, however, is only part of the task of making the data useful to a computer. The next task is to describe the region based on the chosen representation. For example, a boundary representation may be described by the length of the boundary, the orientation of the straight line joining its extreme points, and the number of concavities in the boundary.
To find the features of the target, we need to extract the contours of the image and separate the object from the background based on the area of each contour; the contour with the largest area is the target's physical contour. Here we use the functions below:
(1) Find contours
findContours(image,
           contours,              // array of contours
           CV_RETR_EXTERNAL,      // retrieve only the external contours
           CV_CHAIN_APPROX_NONE); // retain every pixel of each contour
(2) Draw contours
drawContours(result,contours,
           -1,                    // draw all contours
           cv::Scalar(0),         // color: black
           2);                    // contour line thickness: 2
The following image shows the result of contour extraction and object extraction.
Figure 3.2: The result of contour and object extraction
Finally, we choose two characteristics, the length of the boundary and the height of the Feret box, for training and prediction. We also tested other characteristics, but none performed as well as these two.
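A minimal sketch of computing these two features for one contour, consistent with the training code at the end of the post:

#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

void contourFeatures(const std::vector<cv::Point> &contour, float feature[2])
{
    feature[0] = (float)cv::arcLength(contour, true);     // length of the closed boundary
    feature[1] = (float)cv::boundingRect(contour).height; // height of the bounding (Feret) box
}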
3.2.3 Recognition and classification

3.2.3.1 Classifier
We use the SVM (support vector machine) classifier to recognize the targets. Support vector machines are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. There are four main steps to constructing an SVM:
1. Given a training set T = {(x1, y1), (x2, y2), …, (xn, yn)}.
2. Solve the dual quadratic programming problem:
min over α:  (1/2) Σ_i Σ_j α_i α_j y_i y_j (x_i · x_j) − Σ_i α_i
subject to:  Σ_i α_i y_i = 0,   0 ≤ α_i ≤ C,   i = 1, …, n
We get the solution α* = (α*_1, α*_2, …, α*_n)^T.
3. Calculate the parameter w* = Σ_i α*_i y_i x_i, and select a positive component 0 < α*_j < C to compute b* = y_j − Σ_i α*_i y_i (x_i · x_j).
4. Construct the decision boundary w*·x + b* = 0, which gives the decision function f(x) = sign(w*·x + b*).
All the above work can be done with OpenCV functions, but we must prepare the training file and the test file ourselves. The training file is used for learning, and with the features we can then classify the four target types (the test file). In short, the SVM steps can be summarized as: training (learning), testing, and predicting.
When choosing the images, we mainly keep this in mind: to train the SVM effectively, we cannot choose training images freely; instead we need to select shapes with obvious characteristics that are representative of their image type. Shapes that are too unusual, or too similar across classes, interfere with SVM learning: an overly diverse sample increases the differences between feature vectors and reduces classification quality, which increases the burden on the SVM.
The main SVM code used:
// Set up the support vector machine parameters
CvSVMParams params;
params.svm_type    = CvSVM::C_SVC;  // SVM type: C-support vector classification
params.kernel_type = CvSVM::LINEAR; // kernel type: linear
params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6); // termination criterion: stop when the iteration count reaches the maximum
// SVM training
CvSVM SVM;   // create an instance of the SVM model
SVM.train(trainingDataMat, labelsMat, Mat(), Mat(), params); // train the model; arguments: training data, labels, varIdx, sampleIdx, parameters
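A typical follow-up, consistent with the full program listed later in this post: persist the trained model, then reload it in the prediction program and classify a single feature vector (the sample values here are hypothetical):

SVM.save("SVM_Data.xml");              // persist the trained model
CvSVM svm2;
svm2.load("SVM_Data.xml");             // reload it in the prediction program
float sample[2] = {120.0f, 85.0f};     // hypothetical feature vector: boundary length, box height
Mat sampleMat(1, 2, CV_32FC1, sample);
float label = svm2.predict(sampleMat); // returns the predicted class label (+1 or -1)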
Figure 3.3: The result of pattern recognition
3.2.3.2 Recognition results
Test results:

Test samples    Correct identifications    Precision
550             550                        100%
3.2.4. Conclusion
From the results above, we know that the method we use can distinguish the several classes, but errors still exist. Because the number of given pictures is not very large, some error is inevitable when testing the category of a picture. In addition, some categories of pictures share common traits, so their features are similar and they are hard to distinguish. What's more, the SVM classifier itself introduces some error and cannot classify exactly. It may also be that the features we chose are not sufficient, so more work remains to be done.
3.3 Stereo vision
3.3.1 Stereopsis
Fusing the pictures recorded by our two eyes and exploiting the difference (or disparity) between them allows us to gain a strong sense of depth. This section is concerned with the design and implementation of algorithms that mimic our ability to perform this task, known as stereopsis. Reliable computer programs for stereoscopic perception are of course invaluable in visual robot navigation (Figure 3.4), cartography, aerial reconnaissance, and close-range photogrammetry. They are also of great interest in tasks such as image segmentation for object recognition or the construction of three-dimensional scene models for computer graphics applications.
Figure 3.4: Left: The Stanford cart sports a single camera moving in discrete increments along a straight line and providing multiple snapshots of outdoor scenes. Center: The INRIA mobile robot uses three cameras to map its environment. Right: The NYU mobile robot uses two stereo cameras, each capable of delivering an image pair. As shown by these examples, although two eyes are sufficient for stereo fusion, mobile robots are sometimes equipped with three (or more) cameras. This report is concerned with binocular perception, but stereo algorithms using multiple cameras are discussed in [4]. Photos courtesy of Hans Moravec, Olivier Faugeras, and Yann LeCun.
Stereo vision involves two processes: the fusion of features observed by two (or more) eyes and the reconstruction of their three-dimensional preimage. The latter is relatively simple: the preimage of matching points can (in principle) be found at the intersection of the rays passing through these points and the associated pupil centers (or pinholes; see Figure 3.5, left). Thus, when a single image feature is observed at any given time, stereo vision is easy. However, each picture typically consists of millions of pixels, with tens of thousands of image features such as edge elements, and some method must be devised to establish the correct correspondences and avoid erroneous depth measurements (Figure 3.5, right).
Figure 3.5: The binocular fusion problem: in the simple case of the diagram shown on the left, there is no ambiguity, and stereo reconstruction is a simple matter. In the more usual case shown on the right, any of the four points in the left picture may, a priori, match any of the four points in the right one. Only four of these correspondences are correct; the other ones yield the incorrect reconstructions shown as small gray discs.
However, camera calibration can eliminate the distortion and thereby yield more accurate depth information [5].
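A minimal sketch (assuming the calibration outputs listed below: intrinsics M1/D1 and M2/D2, extrinsics R and T) of how OpenCV 2.4 rectifies a stereo pair so that lens distortion is removed and epipolar lines become horizontal:

#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/imgproc/imgproc.hpp>

void rectifyPair(const cv::Mat &M1, const cv::Mat &D1,
                 const cv::Mat &M2, const cv::Mat &D2,
                 const cv::Mat &R,  const cv::Mat &T,
                 cv::Size imageSize,
                 const cv::Mat &left, const cv::Mat &right,
                 cv::Mat &leftRect, cv::Mat &rightRect, cv::Mat &Q)
{
    cv::Mat R1, R2, P1, P2;
    // Compute the rectification transforms (R1, R2), projections (P1, P2),
    // and the disparity-to-depth mapping matrix Q.
    cv::stereoRectify(M1, D1, M2, D2, imageSize, R, T, R1, R2, P1, P2, Q);
    cv::Mat map1x, map1y, map2x, map2y;
    cv::initUndistortRectifyMap(M1, D1, R1, P1, imageSize, CV_32FC1, map1x, map1y);
    cv::initUndistortRectifyMap(M2, D2, R2, P2, imageSize, CV_32FC1, map2x, map2y);
    // Undistort and rectify both views.
    cv::remap(left,  leftRect,  map1x, map1y, cv::INTER_LINEAR);
    cv::remap(right, rightRect, map2x, map2y, cv::INTER_LINEAR);
}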
Figure 3.6: The result of camera calibration; the distortion is eliminated.
The camera parameters are as follows:
extrinsics:1.0
R: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 9.9990360000625755e-001, 9.7790647772508701e-003,
       -9.8570069802389540e-003, -9.8969610939301841e-003,
       9.9987921323260354e-001, -1.1983701700849161e-002,
       9.7386269890257296e-003, 1.2080100886666228e-002,
T: !!opencv-matrix
   rows: 3
   cols: 1
   dt: d
   data: [ 3.4075702905319170e+000, 1.1739005828568252e-003,
R1: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 9.9940336523117301e-001, 9.8399020411941429e-003,
       -3.3107248336674097e-002, -1.0040790862200610e-002,
       9.9993214239488748e-001, -5.9070402429666716e-003,
       3.3046877060745931e-002, 6.2359388539483330e-003,
R2: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 9.9972958617043228e-001, 3.4440467660098648e-004,
       -2.3251578890794586e-002, -2.0319599978103648e-004,
       9.9998152534851392e-001, 6.0751685610454320e-003,
       2.3253241642441649e-002, -6.0688011236303754e-003,
P1: !!opencv-matrix
   rows: 3
   cols: 4
   dt: d
   data: [ 8.9095402067418593e+002, 0., 3.2619792175292969e+002, 0., 0.,
       8.9095402067418593e+002, 2.1098579597473145e+002, 0., 0., 0., 1.,
P2: !!opencv-matrix
   rows: 3
   cols: 4
   dt: d
   data: [ 8.9095402067418593e+002, 0., 3.2619792175292969e+002,
       3.0368096464054670e+003, 0., 8.9095402067418593e+002,
Q: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [ 1., 0., 0., -3.2619792175292969e+002, 0., 1., 0.,
       -2.1098579597473145e+002, 0., 0., 0., 8.9095402067418593e+002, 0.,
intrinsics:1.0
M1: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 1.1136300108848973e+003, 0., 3.0020338800373816e+002, 0.,
D1: !!opencv-matrix
   rows: 1
   cols: 8
   dt: d
   data: [ 1.0442196198936304e-001, -2.3958410365610397e-001, 0., 0., 0.,
M2: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 1.1136300108848973e+003, 0., 3.0559143946713289e+002, 0.,
D2: !!opencv-matrix
   rows: 1
   cols: 8
   dt: d
   data: [ -1.8888461100187631e-001, 5.8249894498215049e+000, 0., 0., 0.,
Then we can get more accurate depth information:
Figure 3.7: Depth information obtained after elimination of distortion.
After rectification of the binocular cameras, the depth error within 10 meters is within 4 cm.
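A minimal sketch of turning a rectified pair into depth; block matching is one common choice of matcher (the post does not name the one actually used), and the Q matrix from stereoRectify reprojects disparity to 3D:

#include <opencv2/calib3d/calib3d.hpp>

cv::Mat depthFromPair(const cv::Mat &leftRect, const cv::Mat &rightRect, const cv::Mat &Q)
{
    cv::StereoBM bm(cv::StereoBM::BASIC_PRESET, 64, 15); // 64 disparities, 15x15 block
    cv::Mat disp16, disp32, xyz;
    bm(leftRect, rightRect, disp16);            // inputs must be rectified 8-bit gray images
    disp16.convertTo(disp32, CV_32F, 1.0/16);   // StereoBM returns fixed-point disparity
    cv::reprojectImageTo3D(disp32, xyz, Q);     // per-pixel (X, Y, Z) in calibration units
    return xyz;                                 // channel 2 of xyz is the depth Z
}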
4. References
1. KaewTraKulPong, P. and R. Bowden (2001). "An improved adaptive background mixture model for real-time tracking with shadow detection."
2. Zivkovic, Z. and F. van der Heijden (2004). "Recursive unsupervised learning of finite mixture models." IEEE Transactions on Pattern Analysis and Machine Intelligence 26(5): 651-656.
3. Zivkovic, Z. and F. van der Heijden (2006). "Efficient adaptive density estimation per image pixel for the task of background subtraction." Pattern Recognition Letters 27(7): 773-780.
4. Forsyth, D. A. and J. Ponce. Computer Vision: A Modern Approach, 2nd edition: 3-22.
5. Bradski, G. and A. Kaehler (2008). Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media.
// test.cpp : defines the entry point for the console application.
//

#include "stdafx.h"
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/tracking.hpp>
#include <iostream>
#include <stdio.h>
#include <fstream>
#include <iomanip>
#include <sstream>  
#include <string>
#include <opencv2/opencv.hpp>
#define train_1 1   // train == 1: run SVM training
#define predict_1 1 // predict == 1: run SVM prediction
using namespace std;  
using namespace cv;
int SVM_train()
{
    // Matrix used to visualize the decision regions
    int width = 512, height = 512;
    Mat image = Mat::zeros(height, width, CV_8UC3);

    // Create some training samples
    float labels[4] = {1.0, -1.0, -1.0, -1.0};
    Mat labelsMat(4, 1, CV_32FC1, labels); // 4 samples, not 3

    float trainingData[4][2] = { {501, 10}, {255, 10}, {501, 255}, {10, 501} };
    Mat trainingDataMat(4, 2, CV_32FC1, trainingData); // 4 samples, not 3

    // Set up the SVM parameters
    CvSVMParams params;
    params.svm_type    = CvSVM::C_SVC;
    params.kernel_type = CvSVM::RBF;
    params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);

    // Train the SVM
    CvSVM SVM;
    SVM.train(trainingDataMat, labelsMat, Mat(), Mat(), params);
    SVM.save("SVM_Data.xml");
    Vec3b green(0,255,0), blue (255,0,0);
    // Paint the regions the SVM assigns to each class
    for (int i = 0; i < image.rows; ++i)
        for (int j = 0; j < image.cols; ++j)
        {
            Mat sampleMat = (Mat_<float>(1,2) << i,j);
            float response = SVM.predict(sampleMat);

            if (response == 1)
                image.at<Vec3b>(j, i)  = green;
            else if (response == -1)
                image.at<Vec3b>(j, i)  = blue;
        }

    // Draw the training data points
    int thickness = -1;
    int lineType = 8;
    circle( image, Point(501,  10), 5, Scalar(  0,   0,   0), thickness, lineType);
    circle( image, Point(255,  10), 5, Scalar(255, 255, 255), thickness, lineType);
    circle( image, Point(501, 255), 5, Scalar(255, 255, 255), thickness, lineType);
    circle( image, Point( 10, 501), 5, Scalar(255, 255, 255), thickness, lineType);

    // Draw the support vectors
    thickness = 2;
    lineType  = 8;
    int c     = SVM.get_support_vector_count();

    for (int i = 0; i < c; ++i)
    {
        const float* v = SVM.get_support_vector(i);
        circle( image,  Point( (int) v[0], (int) v[1]),   6,  Scalar(128, 128, 128), thickness, lineType);
    }

    imwrite("result.png", image);

    imshow("Simple SVM classification", image);
    waitKey(0);
    return 0;
}

// Write out: centroid coordinates, area, perimeter, height, width
void write(double mom0,double mom1,
        double area,double length,double height,double width)
{
        const char *path = "C:\\Users\\Lenovo\\Desktop\\io\\2.txt"; // path of the file to create
        ofstream fout(path);
        if (fout)
        {
                fout<<mom0<<" "<<mom1<<" "<<area<<" "<<length <<" "<<height<<" "<<width<< endl; // written the same way as with cout
                fout.close();
        }
}
class FrameProcessor;
class FrameProcessor{
public:
        virtual void process(cv::Mat &input,cv::Mat &output)=0;
};

// Compute the difference between the current image and the background
class BGFGSegmentor:public FrameProcessor{
        cv::Mat gray;       // current gray image
        cv::Mat background; // accumulated background
        cv::Mat backImage;  // background image
        cv::Mat foreground; // foreground image
        // learning rate for the background accumulation
        double learningRate;
        int threshold;      // threshold for foreground extraction
public:
        BGFGSegmentor():threshold(10),learningRate(0.01){}
        // processing method
        void process(cv::Mat & frame,cv::Mat &output){
                // convert to a gray image
                cv::cvtColor(frame,gray,CV_BGR2GRAY);
                // initialize the background with the first frame
                if(background.empty())
                        gray.convertTo(background,CV_32F);
                // convert the background image to 8U
                background.convertTo(backImage,CV_8U);
                // compute the difference
                cv::absdiff(backImage,gray,foreground);
                // threshold the foreground image
                cv::threshold(foreground,output,
                        threshold,255,cv::THRESH_BINARY_INV);
                // accumulate into the background
                cv::accumulateWeighted(gray,background,learningRate,output);
        }
};
class VideoProcessor{
private:
        cv::VideoCapture capture;
        // video writer object
        cv::VideoWriter writer;
        // output file name
        cv::string Outputfile;

        int currentIndex;
        int digits;
        cv::string extension;
        FrameProcessor *frameprocessor;
        // pointer to the frame-processing function
        void (*process)(cv::Mat &,cv::Mat &);
        bool callIt;
        cv::string WindowNameInput;
        cv::string WindowNameOutput;
        // delay between frames
        int delay;
        long fnumber;
        // stop at frame number frameToStop
        long frameToStop;
        // stop flag
        bool stop;
        // image sequence used as the input video stream
        cv::vector<cv::string> images;
        // (the image iterator itImg is declared below)
public:
        VideoProcessor() : callIt(true),delay(0),fnumber(0),stop(false),digits(0),frameToStop(-1){}
        // set the frame-processing function
        void setFrameProcessor(void (*process)(cv::Mat &,cv::Mat &)){
                frameprocessor = 0;
                this->process = process;
                CallProcess ();
        }
        // open a video file
        bool setInput(cv::string filename){
                fnumber = 0;
                // release any previously opened capture and reopen
                capture.release ();
                return capture.open (filename);
        }
        // set the window that displays the input video
        void displayInput(cv::string wn){
                WindowNameInput = wn;
                cv::namedWindow (WindowNameInput);
        }
        // set the window that displays the output video
        void displayOutput(cv::string wn){
                WindowNameOutput = wn;
                cv::namedWindow (WindowNameOutput);
        }
        // destroy the windows
        void dontDisplay(){
                cv::destroyWindow (WindowNameInput);
                cv::destroyWindow (WindowNameOutput);
                WindowNameInput.clear ();
                WindowNameOutput.clear ();
        }

        // run the processing loop
        void run(){
                cv::Mat frame;
                cv::Mat output;
                if(!isOpened())
                        return;
                stop = false;
                while(!isStopped()){
                        // read the next frame
                        if(!readNextFrame(frame))
                                break;
                        if(WindowNameInput.length ()!=0)
                                imshow (WindowNameInput,frame);
                        // process the frame
                        if(callIt){
                                if(process)
                                        process(frame,output);
                                else if(frameprocessor)
                                        frameprocessor->process (frame,output);
                        }
                        else{
                                output = frame;
                        }
                        if(Outputfile.length ()){
                                cvtColor (output,output,CV_GRAY2BGR);
                                writeNextFrame (output);
                        }
                        if(WindowNameOutput.length ()!=0)
                                imshow (WindowNameOutput,output);
                        // a key press pauses; the next key press resumes
                        if(delay>=0&&cv::waitKey (delay)>=0)
                                cv::waitKey(0);
                        // exit when the designated stop frame is reached
                        if(frameToStop>=0&&getFrameNumber()==frameToStop)
                                stopIt();
                }
        }
        // set the stop flag
        void stopIt(){
                stop = true;
        }
        // query the stop flag
        bool isStopped(){
                return stop;
        }
        // return whether the input is open
        bool isOpened(){
                return  capture.isOpened ()||!images.empty ();
        }
        // set the delay
        void setDelay(int d){
                delay = d;
        }
        // read the next frame
        bool readNextFrame(cv::Mat &frame){
                if(images.size ()==0)
                        return capture.read (frame);
                else{
                        if(itImg!=images.end()){
                                frame = cv::imread (*itImg);
                                itImg++;
                                return frame.data?1:0;
                        }
                        else
                                return false;
                }
        }

        void CallProcess(){
                callIt = true;
        }
        void  dontCallProcess(){
                callIt = false;
        }
        // set the frame to stop at
        void stopAtFrameNo(long frame){
                frameToStop = frame;
        }
        // get the position of the current frame
        long getFrameNumber(){
                long fnumber = static_cast<long>(capture.get (CV_CAP_PROP_POS_FRAMES));
                return fnumber;
        }

        // get the frame size
        cv::Size getFrameSize() {
                if (images.size()==0) {
                        // get the frame size from the video stream
                        int w= static_cast<int>(capture.get(CV_CAP_PROP_FRAME_WIDTH));
                        int h= static_cast<int>(capture.get(CV_CAP_PROP_FRAME_HEIGHT));
                        return cv::Size(w,h);
                }
                else {
                        // get the frame size from the first image
                        cv::Mat tmp= cv::imread(images[0]);
                        return (tmp.data)? tmp.size() : cv::Size(0,0);
                }
        }

        // get the frame rate
        double getFrameRate(){
                return capture.get(CV_CAP_PROP_FPS);
        }
        cv::vector<cv::string>::const_iterator itImg;
        bool setInput (const cv::vector<cv::string> &imgs){
                fnumber = 0;
                capture.release ();
                images = imgs;
                itImg = images.begin ();
                return true;
        }

        void  setFrameProcessor(FrameProcessor *frameprocessor){
                process = 0;
                this->frameprocessor = frameprocessor;
                CallProcess ();
        }

        // get the codec of the input video
        int getCodec(char codec[4]) {
                if (images.size()!=0)
                        return -1;
                union { // 4-char data structure
                        int value;
                        char code[4];
                } returned;
                // get the codec value
                returned.value= static_cast<int>(
                        capture.get(CV_CAP_PROP_FOURCC));
                // get the 4 characters
                codec[0]= returned.code[0];
                codec[1]= returned.code[1];
                codec[2]= returned.code[2];
                codec[3]= returned.code[3];
                return returned.value;
        }


        bool setOutput(const cv::string &filename,int codec = 0,double framerate = 0.0,bool isColor = true){
                // set the file name
                Outputfile = filename;
                // clear the extension
                extension.clear ();
                // set the frame rate
                if(framerate ==0.0){
                        framerate = getFrameRate ();
                }
                // use the codec of the input video
                char c[4];
                if(codec==0){
                        codec = getCodec(c);
                }
                return writer.open(Outputfile,
                        codec,
                        framerate,
                        getFrameSize(),
                        isColor);
        }

        // write frames to images named filename+currentIndex+ext, e.g. filename001.jpg
        bool setOutput (const cv::string &filename,// path
                const cv::string &ext,             // extension
                int numberOfDigits=3,              // number of digits
                int startIndex=0 ){                // start index
                        if(numberOfDigits<0)
                                return false;
                        Outputfile = filename;
                        extension = ext;
                        digits = numberOfDigits;
                        currentIndex = startIndex;
                        return true;
        }

        // write the next frame
        void writeNextFrame(cv::Mat &frame){
                // if the extension is not empty, write to image files
                if(extension.length ()){
                        std::stringstream ss;
                        ss<<Outputfile<<std::setfill('0')<<std::setw(digits)<<currentIndex++<<extension;
                        imwrite (ss.str (),frame);
                }
                // otherwise, write to the video file
                else{
                        writer.write (frame);
                }
        }

};
class FeatureTracker:public FrameProcessor
{
private:
        // current gray image
        Mat gray;
        // previous gray image
        Mat gray_prev;
        // tracked points between the two images, 0 -> 1
        vector<Point2f> points[2];
        // initial positions of the tracked points
        vector<Point2f> initial;
        // detected features
        vector<Point2f> features;
        // maximum number of features to track
        int max_count;
        // quality level for feature detection
        double qlevel;
        // minimum distance between two points
        double minDist;
        // status of the tracked features
        vector<uchar> status;
        // errors during tracking
        vector<float> err;
public:
        FeatureTracker():max_count(500), qlevel(0.01), minDist(10.){}

        // should new feature points be added?
        bool addNewPoints()
        {
                // if the number of points is too small
                return points[0].size() <= 10;
        }

        // detect feature points
        void detectFeaturePoints()
        {
                // detect the features
                goodFeaturesToTrack(gray, // image
                        features,         // detected features
                        max_count,        // maximum number of features
                        qlevel,           // quality level
                        minDist);         // minimum distance between two features
        }

        // decide which points should be kept
        bool acceptTrackedPoint(int i)
        {
                return status[i] &&
                        // if it has moved
                        ((abs(points[0][i].x - points[1][i].x) + abs(points[0][i].y - points[1][i].y)) > 2);
        }

        // handle the currently tracked points
        void handleTrackedPoints(Mat &frame, Mat &output)
        {
                // loop over all tracked points
                for (int i = 0; i < points[1].size(); i++)
                {
                        // draw a line and a circle
                        line(output,
                                initial[i],   // initial position
                                points[1][i], // new position
                                Scalar(255, 255, 255));
                        circle(output, points[1][i], 3, Scalar(255, 255, 255), -1);
                }
        }
        void process(Mat &frame, Mat &output)
        {
                // convert to a gray image
                cvtColor(frame, gray, CV_BGR2GRAY);
                frame.copyTo(output);
                // 1. if new feature points are needed
                if (addNewPoints())
                {
                        // detect them
                        detectFeaturePoints();
                        // add the detected features to the currently tracked features
                        points[0].insert(points[0].end(), features.begin(), features.end());
                        initial.insert(initial.end(), features.begin(), features.end());
                }
                // for the first image of the sequence
                if (gray_prev.empty())
                {
                        gray.copyTo(gray_prev);
                }
                // 2. track the feature points
                calcOpticalFlowPyrLK(
                        gray_prev, gray, // two consecutive images
                        points[0],       // input point coordinates in image 1
                        points[1],       // output point coordinates in image 2
                        status,          // tracking success
                        err);            // tracking error
                // 3. loop over the tracked points and filter them
                int k = 0;
                for (int i = 0; i < points[1].size(); i++)
                {
                        // should this point be kept?
                        if (acceptTrackedPoint(i))
                        {
                                // keep it
                                initial[k] = initial[i];
                                points[1][k++] = points[1][i];
                        }
                }
                // remove the unsuccessful points
                points[1].resize(k);
                initial.resize(k);
                // 4. handle the accepted tracked points
                handleTrackedPoints(frame, output);
                // 5. the current points and image become the previous ones
                swap(points[1], points[0]);
                swap(gray_prev, gray);
        }
};

// Frame-processing function: Canny edge detection to obtain contours
void canny(cv::Mat& img, cv::Mat& out) {
        // convert to gray
        if (img.channels()==3)
                cvtColor(img,out,CV_BGR2GRAY);
        // Canny operator for edge detection
        Canny(out,out,100,200);
        // invert the colors so the result is easier to look at
        cv::threshold(out,out,128,255,cv::THRESH_BINARY_INV);
}

void help()
{
        printf("\nDo background segmentation, especially demonstrating the use of cvUpdateBGStatModel().\n"
                "Learns the background at the start and then segments.\n"
                "Learning is toggled by the space key. Will read from file or camera\n"
                "Usage: \n"
                "                        ./bgfg_segm [--camera]=<use camera, if this key is present>, [--file_name]=<path to movie file> \n\n");
}

const char* keys =
{
        "{c |camera   |false    | use camera or not}"
        "{fn|file_name|tree.avi | movie file             }"
};
void find_track(cv::Mat frame, cv::Mat image)
{
        // the SVM used for prediction (trained and saved by branch 'c' of main)
        CvSVM SVM;
        cv::namedWindow("frame");
        std::vector<std::vector<cv::Point>> contours;
        cv::Mat result(image.size(),CV_8U,cv::Scalar(255));
        cv::findContours(image,
                contours,              // array of contours
                CV_RETR_EXTERNAL,      // retrieve only the external contours
                CV_CHAIN_APPROX_NONE); // retain every pixel of each contour
        // draw black contours on a white image,
        // after removing contours that are too long or too short
        unsigned int cmin=200;  // minimum contour length
        unsigned int cmax=1000; // maximum contour length
        std::vector<std::vector<cv::Point>>::const_iterator itc=contours.begin();
        while(itc!=contours.end()){
                if(itc->size()<cmin||itc->size()>cmax)
                        itc=contours.erase(itc);
                else
                        ++itc;
        }

        for(int i=0;i<(int)contours.size();i++)
        {
                cv::Rect r0=cv::boundingRect(cv::Mat(contours[i]));
                cv::rectangle(result,r0,cv::Scalar(0),2);
                cv::rectangle(frame,r0,cv::Scalar(0),2);
        }

        cv::drawContours(result,contours,
                -1,            // draw all contours
                cv::Scalar(0), // color: black
                2);            // contour line thickness: 2
        cv::imshow("contours",result);
        cv::imshow("frame",frame);
        itc=contours.begin();
        int a=0;

        while(itc!=contours.end()){
                // compute all the moments
                cv::Moments mom=cv::moments(cv::Mat(*itc++));
                // draw the centroid (converted to integer coordinates)
                cv::circle(result,
                        cv::Point((int)(mom.m10/mom.m00),(int)(mom.m01/mom.m00)),
                        2,cv::Scalar(0),2); // draw a black dot
                cv::Rect r0=cv::boundingRect(cv::Mat(contours[a]));
                cv::rectangle(result,r0,cv::Scalar(0),2);
                cv::rectangle(frame,r0,cv::Scalar(0),2);
                // feature vector: boundary length and bounding-box height
                // step 1: array -> matrix
                float featureData[2]={
                        arcLength(contours[a],true), r0.height};
                Mat featureDataMat(1,2, CV_32FC1, featureData);
                // step 2: load the trained SVM
                // (loading once, outside the loop, would be more efficient)
                SVM.load( "SVM_Data.xml" );
                // step 3: predict
                float ret = SVM.predict(featureDataMat);
                cout<<ret<<endl;
                if(ret==-1)
                        cout<<"walk"<<endl;
                else
                        cout<<"bend"<<endl;
                a++;
        }
}

int main(int argc, char *argv[])
{
        cout<<"Choose a: extract features to the training file; "
            <<"choose b: run SVM prediction; "
            <<"choose c: train the SVM and save it to SVM_Data.xml"<<endl;
        char x;
        cin.get(x);
        if(x=='a')
        {
                const char *path = "C:\\Users\\lenovo\\Desktop\\train_bend.txt"; // path of the file to create
                ofstream fout(path);
                cv::VideoCapture capture("bend.avi");
                if(!capture.isOpened())
                        return 0;
                // current video frame
                cv::Mat frame;
                // foreground image
                cv::Mat foreground;
                cv::namedWindow("foreground");
                // use the default Mixture of Gaussians object
                cv::BackgroundSubtractorMOG mog;
                bool stop(false);
                // loop over every frame
                while(!stop)
                {
                        // read the next frame
                        if(!capture.read(frame))
                                break;
                        // remove noise with a median filter
                        // (the kernel size must be an odd value > 1 to have any effect)
                        cv::medianBlur(frame,frame,3);
                        // update the model and return the foreground
                        mog(frame,foreground,0.001);
                        cv::GaussianBlur(foreground,foreground,cv::Size(5,5),3); // Gaussian smoothing
                        cv::threshold(foreground,foreground,128,255,cv::THRESH_BINARY);
                        // morphological cleanup of the foreground mask
                        cv::erode(foreground,foreground,cv::Mat(),cv::Point(-1,-1),1);  // erode the image
                        cv::dilate(foreground,foreground,cv::Mat(),cv::Point(-1,-1),1); // dilate the image
                        cv::dilate(foreground,foreground,cv::Mat(),cv::Point(-1,-1),5); // dilate the image
                        cv::erode(foreground,foreground,cv::Mat(),cv::Point(-1,-1),2);  // erode the image
                        // show the foreground
                        cv::imshow("foreground",foreground);
                        cv::namedWindow("frame");
                        std::vector<std::vector<cv::Point>> contours;
                        cv::Mat result(foreground.size(),CV_8U,cv::Scalar(255));
                        cv::findContours(foreground,
                            contours,              // array of contours
                            CV_RETR_EXTERNAL,      // retrieve only the external contours
                            CV_CHAIN_APPROX_NONE); // retain every pixel of each contour
                        // draw black contours on a white image,
                        // after removing contours that are too long or too short
                        unsigned int cmin=100;  // minimum contour length
                        unsigned int cmax=1000; // maximum contour length
                        std::vector<std::vector<cv::Point>>::const_iterator itc=contours.begin();
                        while(itc!=contours.end()){
                                if(itc->size()<cmin||itc->size()>cmax)
                                        itc=contours.erase(itc);
                                else
                                        ++itc;
                        }

                        for(int i=0;i<(int)contours.size();i++)
                        {
                                cv::Rect r0=cv::boundingRect(cv::Mat(contours[i]));
                                cv::rectangle(result,r0,cv::Scalar(0),2);
                                cv::rectangle(frame,r0,cv::Scalar(0),2);
                        }

                        cv::drawContours(result,contours,
                                -1,            // draw all contours
                                cv::Scalar(0), // color: black
                                2);            // contour line thickness: 2
                        cv::imshow("contours",result);
                        cv::imshow("frame",frame);
                        itc=contours.begin();
                        int a=0;
                        while(itc!=contours.end()){
                                // compute all the moments
                                cv::Moments mom=cv::moments(cv::Mat(*itc++));
                                // draw the centroid (converted to integer coordinates)
                                cv::circle(result,
                                        cv::Point((int)(mom.m10/mom.m00),(int)(mom.m01/mom.m00)),
                                        2,cv::Scalar(0),2); // draw a black dot
                                cv::Rect r0=cv::boundingRect(cv::Mat(contours[a]));
                                cv::rectangle(result,r0,cv::Scalar(0),2);
                                cv::rectangle(frame,r0,cv::Scalar(0),2);
                                // print the feature vector: centroid, area, perimeter, height, width
                                cout<<a<<":"<<mom.m10/mom.m00 << " " <<mom.m01/mom.m00
                                        << " " << contourArea(contours[a]) << " "
                                        <<arcLength(contours[a], true) <<" "<< r0.height<<" "<<r0.width<<endl;
                                if (fout)
                                {
                                        // write the two training features: perimeter and bounding-box height
                                        fout<<arcLength(contours[a], true)<<" "<<r0.height<<endl;
                                }
                                a++;
                        }
                        if(cv::waitKey(10)>=0)
                                stop=true;
                }
                // close the video file (the destructor would also do this)
                capture.release();
                fout.close();
        }

       
        if(x=='b')
        {
                cv::VideoCapture capture("bend.avi");
                if(!capture.isOpened())
                        return 0;
                // current video frame
                cv::Mat frame;
                // foreground image
                cv::Mat foreground;
                cv::namedWindow("foreground");
                // use the default Mixture of Gaussians object
                cv::BackgroundSubtractorMOG mog;
                bool stop(false);
                // loop over every frame
                while(!stop)
                {
                        // read the next frame
                        if(!capture.read(frame))
                                break;
                        // remove noise with a median filter
                        cv::medianBlur(frame,frame,3);
                        // update the model and return the foreground
                        mog(frame,foreground,0.001);
                        cv::GaussianBlur(foreground,foreground,cv::Size(5,5),3); // Gaussian smoothing
                        cv::threshold(foreground,foreground,128,255,cv::THRESH_BINARY);
                        // morphological cleanup of the foreground mask
                        cv::erode(foreground,foreground,cv::Mat(),cv::Point(-1,-1),1);  // erode the image
                        cv::dilate(foreground,foreground,cv::Mat(),cv::Point(-1,-1),1); // dilate the image
                        cv::dilate(foreground,foreground,cv::Mat(),cv::Point(-1,-1),5); // dilate the image
                        cv::erode(foreground,foreground,cv::Mat(),cv::Point(-1,-1),2);  // erode the image
                        // show the foreground
                        cv::imshow("foreground",foreground);
                        // extract contours, compute features, and classify them with the SVM
                        find_track(frame,foreground);
                        // a key press stops the loop
                        if(cv::waitKey(10)>=0)
                                stop=true;
                }
        }
        /*
        // create a video-processing instance
        VideoProcessor processor;
        // create the foreground/background segmentor
        //  FeatureTracker tracker;
        BGFGSegmentor segmentor;
        //segmentor.setThreshold(25);
        // open the input video
        processor.setInput ("../video.avi");
        processor.setFrameProcessor (&segmentor);
        //   processor.displayInput ("Current Frame");
        processor.displayOutput ("Extract Foreground");
        // set the delay for each frame
        processor.setDelay (1000./processor.getFrameRate ());
        // an output can also be set if desired
        // processor.setOutput ("C:\\Users\\Lenovo\\Desktop\\im\\bikeout.avi");
        //processor.setOutput ("C:\\Users\\Lenovo\\Desktop\\im\\bikeout.jpg");
        processor.run();
        */
        if(x=='c')
        {
                ifstream input("C:\\Users\\lenovo\\Desktop\\train_data.txt");
                if(!input)
                {
                        cerr << "error!";
                }
                string line, word;
                float trainingData[850][2];
                int m(0), n(0);
                while (getline(input, line)) // read the file line by line
                {
                        istringstream isstream(line);
                        while (isstream >> word) // read the space-separated values on the line
                        {
                                trainingData[m][n] = stof(word); // string to float
                                ++n;
                        }
                        ++m;
                        n = 0;
                }
                float labels[850] = { };
                int k=0;
                for(int i=0;i<410;i++)
                {
                        labels[k]=-1;
                        k++;
                }
                // for(int i=379;i<400;i++)
                // {
                //         labels[k]=1;
                //         k++;
                // }
                Mat labelsMat(850,1, CV_32FC1, labels);
                Mat trainingDataMat(850,2, CV_32FC1, trainingData);
                // set up the SVM parameters
                CvSVMParams params;
                params.svm_type    = CvSVM::C_SVC;
                params.kernel_type = CvSVM::RBF;
                params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);

                // train the SVM
                CvSVM SVM;
                SVM.train(trainingDataMat, labelsMat, Mat(), Mat(), params);
                SVM.save("SVM_Data.xml");
        }
        return 0;
}




Posted 2015-7-24 14:38:12
This actually works? You have to show us the results, though.
Posted 2018-8-17 13:30:39
OP, what is the name of this book? I would like to study it too.
Posted 2018-8-26 15:56:23
What does this mean... open-source code?