MakerGram
    [Solved] Help needed for face detection - deep learning

    General Discussion
    • Nandu
      Nandu last edited by salmanfaris

      I have code for detecting faces, but now I need to count the number of faces.

      # OpenCV Python program to detect cars in video frames
      # import the OpenCV library
      import cv2

      # capture frames from a video
      cap = cv2.VideoCapture('video.avi')

      # trained XML classifier describing features of the object we want to detect
      car_cascade = cv2.CascadeClassifier('cars.xml')

      # loop runs while capturing is initialized
      while True:
          # read a frame from the video
          ret, frames = cap.read()
          if not ret:
              break

          # convert the frame to grayscale
          gray = cv2.cvtColor(frames, cv2.COLOR_BGR2GRAY)

          # fill the frame with white (alternatively: frames[:] = 255)
          frames.fill(255)

          # detect cars of different sizes in the grayscale image
          cars = car_cascade.detectMultiScale(gray, 1.1, 1)

          # draw a filled rectangle around each detected car
          for (x, y, w, h) in cars:
              cv2.rectangle(frames, (x, y), (x + w, y + h), (0, 0, 0), -1)

          # display the frame in a window
          cv2.imshow('video2', frames)

          # wait for the Esc key to stop
          if cv2.waitKey(33) == 27:
              break

      # release the capture and de-allocate any associated memory
      cap.release()
      cv2.destroyAllWindows()
      
      • salmanfaris
        salmanfaris last edited by salmanfaris

        Hi @Nandu, you can increment a counter variable each time a face is detected. Does that help?

        • Nandu
          Nandu @salmanfaris last edited by

          @salmanfaris Yeah, that's what I want, but I can't figure out where to place the counter in the code above.

          • A
            arunksoman last edited by

            Follow these steps:

            1. Create a virtual environment and activate it:
            python -m venv venv
            

            Activate venv for windows using following command:

            .\venv\Scripts\activate
            

            For Ubuntu:

            source venv/bin/activate
            
            2. Install the necessary packages in the venv:
            pip install opencv-python
            
            pip install imutils
            
            3. Create the folder structure shown below in your workspace:
            TestPrograms  
            |
            ├─ cascades
            │  └─ haarcascade_frontalface_default.xml
            ├─ detect_faces.py
            ├─ images
            │  └─ obama.jpg
            ├─ utilities
            │  └─ facedetector.py
            
            
            4. The program for utilities/facedetector.py is given below:
            import cv2
            class FaceDetector:
                def __init__(self, face_cascade_path):
                    # Load the face detector
                    self.face_cascade = cv2.CascadeClassifier(face_cascade_path)
            
                def detect(self, image, scale_factor=1.2, min_neighbors=3):
                    # Detect faces in the image
                    boxes = self.face_cascade.detectMultiScale(image, scale_factor, min_neighbors, flags=cv2.CASCADE_SCALE_IMAGE, minSize=(30,30))
            
                    # Return the bounding boxes
                    return boxes
            
            5. The program for detect_faces.py:
            from utilities.facedetector import FaceDetector
            import imutils
            import cv2
            
            # Define paths
            image_path = 'images/obama.jpg'
            cascade_path = 'cascades/haarcascade_frontalface_default.xml'
            
            # Load the image and convert it to greyscale
            image = cv2.imread(image_path)
            image = imutils.resize(image, width=600)
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            
            # Find faces in the image
            detector = FaceDetector(cascade_path)
            face_boxes = detector.detect(gray, 1.2, 5)
            print("{} face(s) found".format(len(face_boxes)))
            
            # Loop over the faces and draw a rectangle around each
            for (x, y, w, h) in face_boxes:
                cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
            
            # Show the detected faces until a key is pressed
            cv2.imshow("Faces", image)
            cv2.waitKey(0)
            cv2.destroyAllWindows()
            
            6. Links to the necessary files:
              Haar cascade frontal face
              Obama Family Image
            • Nandu
              Nandu @arunksoman last edited by

              @arunksoman thank you 🥳

              • A
                arunksoman @Nandu last edited by

                @Nandu But I have to mention that this is not a deep learning method. It is based on integral images (the Viola-Jones algorithm), which is classical machine learning. Since OpenCV 3.4.3 there is a DNN module that can load Caffe, Torch, and TensorFlow models. You can find Caffe models on the Internet for detecting faces; using those, face detection can be done quite efficiently. If you have any doubts, feel free to ask here.

                • Nandu
                  Nandu @arunksoman last edited by

                  @arunksoman How does this code help me count faces if deep learning isn't used?

                  • A
                    arunksoman @Nandu last edited by arunksoman

                    @Nandu Please read the comment above carefully and look up how the Viola-Jones algorithm works. Sorry for misunderstanding what you said; that is why I edited the comment.

                    • salmanfaris
                      salmanfaris last edited by

                      @Nandu Did you complete it? Excited to see.

                      • Nandu
                        Nandu @salmanfaris last edited by salmanfaris

                        @salmanfaris The count shows in the terminal below. I followed some of the steps in a different manner. Thank you for helping me! 🙂

                        IMG-20200314-WA0032.jpg

                        By MakerGram | A XiStart Initiative | Built with ♥ NodeBB
                        Copyright © 2023 MakerGram, All rights reserved.
                        Privacy Policy | Terms & Conditions | Disclaimer | Code of Conduct