Mobile robot elevator operation

Brief Description

For mobile robots to be effective in a multi-floor environment, they must be able to operate an elevator and determine when it reaches a floor. In this project, we wrote a C++ node in ROS (Robot Operating System) that subscribes to a USB camera's video feed and analyzes it frame by frame to detect changes in the lights on the elevator buttons. The project was built specifically for the elevators of the Gates-Dell Complex on the UT Austin campus.
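
As a rough illustration of the node's structure, the sketch below shows a minimal ROS 1 subscriber (using image_transport and cv_bridge) that receives camera frames and hands each one to an analysis routine. The topic name "/usb_cam/image_raw", the node name, and the empty callback body are assumptions for illustration, not the project's actual code.

```cpp
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/Image.h>
#include <opencv2/opencv.hpp>

void imageCallback(const sensor_msgs::ImageConstPtr& msg) {
  // Convert the ROS image message into an OpenCV BGR matrix.
  cv::Mat frame = cv_bridge::toCvCopy(msg, "bgr8")->image;
  // Per-frame analysis (button and indicator-light detection) would go here.
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "elevator_button_monitor");  // hypothetical node name
  ros::NodeHandle nh;
  image_transport::ImageTransport it(nh);
  // Subscribe to the USB camera's video feed, one frame at a time.
  image_transport::Subscriber sub =
      it.subscribe("/usb_cam/image_raw", 1, imageCallback);
  ros::spin();  // Hand control to ROS; imageCallback runs on each frame.
  return 0;
}
```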

Introduction

The Building-Wide Intelligence (BWI) project's primary mission is to design fully autonomous robots that can become a permanent part of the environment in the Gates-Dell Complex. To become fully autonomous, these robots must be able to navigate the building's interior. Currently, the robots can navigate a pre-configured map of the environment, built beforehand in a separate mapping step. Ultimately, the goal is autonomous robots that can navigate variable environments, i.e., environments in which objects have no fixed location but move around dynamically as people interact with the space.

Part of this mission is to implement functionality that allows the robots to move between floors. Currently, the robots have very little ability to interact with the elevators. In this project, we explore one important component of that functionality: how the robot determines which floor it is on.

Using a secondary USB camera, we programmed a Robot Operating System node that takes a clear video feed of the elevator button panel (assuming an unobstructed view, e.g., an empty elevator) and detects which specific buttons are lit. The detection targets circular buttons with a floor indicator light at the center of the button. The node first detects the buttons themselves and the floor numbers printed next to them; it then detects the indicator circle at each button's center and maps each button to its floor number in real time. A sketch of one way to implement this detection appears below.
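
One plausible way to implement this with OpenCV is a Hough circle transform to find the round buttons, followed by a brightness check at each circle's center to decide whether the indicator light is lit. The sketch below follows that idea; the function names and all parameters and thresholds are illustrative assumptions, not the project's exact values.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Find circular button candidates in a frame; each result is (x, y, radius).
std::vector<cv::Vec3f> detectButtons(const cv::Mat& frame) {
  cv::Mat gray;
  cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
  cv::medianBlur(gray, gray, 5);  // suppress noise before circle detection
  std::vector<cv::Vec3f> circles;
  cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT,
                   /*dp=*/1, /*minDist=*/30,
                   /*param1=*/100, /*param2=*/30,
                   /*minRadius=*/10, /*maxRadius=*/60);
  return circles;
}

// Decide whether a button's central indicator light is on by averaging the
// intensity inside a small disc at the circle's center.
bool isLit(const cv::Mat& gray, const cv::Vec3f& button) {
  cv::Mat mask = cv::Mat::zeros(gray.size(), CV_8UC1);
  cv::circle(mask, cv::Point(cvRound(button[0]), cvRound(button[1])),
             cvRound(button[2] / 3), cv::Scalar(255), cv::FILLED);
  return cv::mean(gray, mask)[0] > 180.0;  // illustrative brightness threshold
}
```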

Demos

  1. https://www.youtube.com/watch?v=sZY1VVkkkG4&feature=youtu.be This demo shows a primitive version of our floor detector. At this stage of development, the program counts the number of white pixels on each button to determine the button's state, and declares that the corresponding floor has been reached when the white pixel count drops by a hard-coded amount. The idea behind this implementation is that the white pixel count decreases when the light on the button turns off. This method clearly does not work well, because when people move within the elevator, or even when the elevator door opens, the lighting changes and can produce false positives. For example, the button for floor seven changes quite a bit whenever the door opens, which is why the program incorrectly reports reaching floor seven multiple times. The program also only detects that a floor has been reached by watching for lights to turn off. Thus, when the elevator reaches the bottom floor while buttons for floors above it are still pressed, the elevator resets those buttons and the program is tricked into thinking it has simultaneously reached all of the previously lit floors. A sketch of this pixel-counting approach follows this list.
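
For concreteness, the sketch below outlines the white-pixel-counting logic this demo describes: count bright pixels inside each button's region and treat a large drop as the light turning off. The region of interest, brightness threshold, and hard-coded drop ratio are assumptions chosen for illustration.

```cpp
#include <opencv2/opencv.hpp>

// Count pixels brighter than a fixed threshold inside one button's region.
int countWhitePixels(const cv::Mat& frame, const cv::Rect& buttonRoi) {
  cv::Mat gray, bright;
  cv::cvtColor(frame(buttonRoi), gray, cv::COLOR_BGR2GRAY);
  cv::threshold(gray, bright, 200, 255, cv::THRESH_BINARY);
  return cv::countNonZero(bright);
}

// Hard-coded drop test: a fall to under half the lit baseline is treated as
// the light turning off. As the demo shows, ambient lighting changes (people
// moving, the door opening) cause similar drops, producing false positives.
bool lightTurnedOff(int baselineCount, int currentCount) {
  return currentCount < baselineCount / 2;
}
```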

Instructions

Installation instructions:

Authors and Contributors

@archiejain1021 @Ashay-Lokhande @brahmasp