bcaine / maddux

License: MIT
A Python Robot Arm Toolkit and Simulation Environment for Education

Programming Languages

Jupyter Notebook
Python
Shell

Projects that are alternatives to or similar to maddux

scikit-robot
A Flexible Framework for Robot Control in Python
Stars: ✭ 70 (+180%)
Mutual labels:  robot, kinematics
Handeye calib camodocal
Easy-to-use and accurate hand-eye calibration that has been working reliably for years (2016-present) with Kinect, Kinect v2, RGB-D cameras, optical trackers, and several robots including the UR5 and KUKA iiwa.
Stars: ✭ 364 (+1356%)
Mutual labels:  robot, kinematics
wb-toolbox
Simulink toolbox to rapidly prototype robot controllers
Stars: ✭ 20 (-20%)
Mutual labels:  robot, kinematics
Hexapod
Blazing fast hexapod robot simulator for the web.
Stars: ✭ 370 (+1380%)
Mutual labels:  robot, kinematics
EvoArm
An open-source 3D-printable robotic arm
Stars: ✭ 114 (+356%)
Mutual labels:  robot, robot-arm
Kinematics
🤖 JavaScript 6DOF robot kinematics library
Stars: ✭ 187 (+648%)
Mutual labels:  robot, kinematics
kinpy
Simple kinematics calculation toolkit for robotics
Stars: ✭ 48 (+92%)
Mutual labels:  robot, kinematics
Hexapod Robot Simulator
A hexapod robot simulator built from first principles
Stars: ✭ 577 (+2208%)
Mutual labels:  robot, kinematics
Makelangelo Firmware
CNC firmware for many different control boards and kinematic systems. Originally the brain of the Makelangelo art robot.
Stars: ✭ 116 (+364%)
Mutual labels:  robot, kinematics
Venom
All Terrain Autonomous Quadruped
Stars: ✭ 145 (+480%)
Mutual labels:  robot, kinematics
Pybotics
The Python Toolbox for Robotics
Stars: ✭ 192 (+668%)
Mutual labels:  robot, kinematics
OPQ-SetuBot
A setu (image-posting) bot based on botoy and OPQBot
Stars: ✭ 194 (+676%)
Mutual labels:  robot
diffbot
DiffBot is an autonomous 2WD differential drive robot using ROS Noetic on a Raspberry Pi 4 B. With its SLAMTEC lidar and the ROS Control hardware interface, it can navigate an environment using the ROS Navigation stack and use SLAM algorithms to map unknown environments.
Stars: ✭ 172 (+588%)
Mutual labels:  robot
icra20-hand-object-pose
[ICRA 2020] Robust, Occlusion-aware Pose Estimation for Objects Grasped by Adaptive Hands
Stars: ✭ 42 (+68%)
Mutual labels:  robot
realant
RealAnt robot platform for low-cost, real-world reinforcement learning
Stars: ✭ 40 (+60%)
Mutual labels:  robot
ros webconsole
🌐 A ROS web console to control your robot remotely. Based on RobotWebTools.
Stars: ✭ 71 (+184%)
Mutual labels:  robot
sixi
Sixi Robot Arm
Stars: ✭ 23 (-8%)
Mutual labels:  robot
Skycam
Moving a weight hung on four cables pulled by motors at the top corners of a box
Stars: ✭ 25 (+0%)
Mutual labels:  robot
open manipulator simulations
ROS Simulation for OpenManipulator
Stars: ✭ 15 (-40%)
Mutual labels:  robot
FuyaoBot
A QQ bot based on Mirai, Spring Boot, MySQL, and MyBatis Plus.
Stars: ✭ 30 (+20%)
Mutual labels:  robot

Maddux

Robot Arm and Simulation Environment

You can view the complete documentation here.

Created for use in a project for Robert Platt's Robotics course

Features
  • Arbitrary Length Arms
  • Forward Kinematics
  • Inverse Kinematics
  • Simulation Environment (with objects like Balls, Targets, Obstacles)
  • 3D Environment Animations
  • 3D Arm Animations
  • End Effector Position, Velocity

Arm Visualization and Animations

import numpy as np
from maddux.objects import Obstacle, Ball
from maddux.environment import Environment
from maddux.robots import simple_human_arm

obstacles = [Obstacle([1, 2, 1], [2, 2.5, 1.5]),
             Obstacle([3, 2, 1], [4, 2.5, 1.5])]
ball = Ball([2.5, 2.5, 2.0], 0.25)

q0 = np.array([0, 0, 0, np.pi / 2, 0, 0, 0])
human_arm = simple_human_arm(2.0, 2.0, q0, np.array([3.0, 1.0, 0.0]))

env = Environment(dimensions=[10.0, 10.0, 20.0],
                  dynamic_objects=[ball],
                  static_objects=obstacles,
                  robot=human_arm)

q_new = human_arm.ikine(ball.position)
human_arm.update_angles(q_new)
env.plot()

Example Arm
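
The `ikine` call above solves inverse kinematics numerically for the full 7-DOF arm. As a minimal illustration of the idea (plain NumPy, not part of the maddux API), here is the classic analytic solution for a planar 2-link arm, verified by running forward kinematics on the result:

```python
import numpy as np

def planar_2link_ik(x, y, l1, l2):
    """Analytic inverse kinematics for a planar 2-link arm (elbow-up branch)."""
    # Law of cosines gives the elbow angle from the target distance.
    c2 = (x ** 2 + y ** 2 - l1 ** 2 - l2 ** 2) / (2 * l1 * l2)
    q2 = np.arccos(np.clip(c2, -1.0, 1.0))
    # Shoulder angle: direction to target minus the offset caused by the elbow bend.
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return q1, q2

def planar_2link_fk(q1, q2, l1, l2):
    """Forward kinematics: end-effector position from joint angles."""
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return x, y

q1, q2 = planar_2link_ik(1.0, 1.0, 1.0, 1.0)
x, y = planar_2link_fk(q1, q2, 1.0, 1.0)  # recovers the target (1.0, 1.0)
```

The same round-trip check (IK, then FK on the result) is a useful sanity test for any solver, including maddux's.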

Environment Visualizations and Animations

import numpy as np

from maddux.environment import Environment
from maddux.objects import Ball, Target

ball = Ball([2, 0, 2], 0.25)
target = Target([2, 10, 2], 0.5)
environment = Environment(dynamic_objects=[ball], static_objects=[target])

release_velocity = np.array([0, 15, 5])
ball.throw(release_velocity)

# Either run environment for n seconds
environment.run(2.0)
# And plot the result
environment.plot()

# Or, you can animate it while running
environment.animate(2.0)

Example Plot
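
The `throw` above follows simple ballistic motion. A quick sketch of the underlying constant-acceleration kinematics (plain NumPy, assuming g = 9.81 m/s²; not maddux API):

```python
import numpy as np

g = 9.81                          # gravitational acceleration (assumed)
p0 = np.array([2.0, 0.0, 2.0])    # release position, as in the example above
v0 = np.array([0.0, 15.0, 5.0])   # release velocity, as in the example above

def position(t):
    """Position under constant acceleration; gravity acts along -z."""
    return p0 + v0 * t + 0.5 * np.array([0.0, 0.0, -g]) * t ** 2

# The ball returns to its release height z = 2 at t = 2 * vz / g.
t_land = 2 * v0[2] / g
print(position(t_land))  # y has advanced by vy * t_land ≈ 15.29
```

This is the trajectory the environment integrates when you call `run` or `animate` with a thrown ball.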

Arm Usage

import numpy as np

from maddux.robots.link import Link
from maddux.robots.arm import Arm

# Create a series of links (each link has one joint)
L1 = Link(0, 0, 0, 1.571)
L2 = Link(0, 0, 0, -1.571)
L3 = Link(0, 0.4318, 0, -1.571)
L4 = Link(0, 0, 0, 1.571)
L5 = Link(0, 0.4318, 0, 1.571)
links = np.array([L1, L2, L3, L4, L5])

# Initial arm angle
q0 = np.array([0, 0, 0, np.pi/2, 0])

# Create arm
r = Arm(links, q0, '1-link')
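
Each `Link` is parameterized Denavit-Hartenberg style. Assuming the common (theta, d, a, alpha) ordering (an assumption here; check the `Link` docs for maddux's exact convention), forward kinematics is just one homogeneous transform per link, chained together:

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params, q):
    """Chain the per-link transforms; q holds the joint angles (theta)."""
    T = np.eye(4)
    for (_, d, a, alpha), theta in zip(dh_params, q):
        T = T @ dh_matrix(theta, d, a, alpha)
    return T

# Hypothetical parameters mirroring the links above, as (theta, d, a, alpha).
dh_params = [(0, 0, 0, 1.571), (0, 0, 0, -1.571), (0, 0.4318, 0, -1.571),
             (0, 0, 0, 1.571), (0, 0.4318, 0, 1.571)]
q = np.array([0, 0, 0, np.pi / 2, 0])
end_effector = forward_kinematics(dh_params, q)[:3, 3]  # x, y, z position
```

The last column of the chained transform is the end-effector position; the upper-left 3x3 block is its orientation.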

Use with Deep Q Learning

This library was created with the intent of experimenting with reinforcement learning on robot manipulators. nivwusquorum/tensorflow-deepq provides an excellent tool to experiment with Deep Q Learning.

maddux/rl_experiments/ provides full reinforcement learning classes and arm environments for doing obstacle avoidance and manipulator control using the above Deep Q Learning framework.
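
The experiments above use Deep Q Learning; as a minimal, framework-free illustration of the underlying Q-learning update (a toy tabular example, not the maddux/TensorFlow code):

```python
import numpy as np

# Toy tabular Q-learning: a 1-D corridor with states 0..4, actions
# 0 = left / 1 = right, and reward 1 only for reaching state 4.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

def choose(q_row):
    """Epsilon-greedy action selection with random tie-breaking."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    best = np.flatnonzero(q_row == q_row.max())
    return int(rng.choice(best))

for episode in range(200):
    s = 0
    while s != 4:
        a = choose(Q[s])
        s_next = max(0, s - 1) if a == 0 else min(4, s + 1)
        r = 1.0 if s_next == 4 else 0.0
        # The Q-learning update: move Q(s, a) toward r + gamma * max Q(s', .).
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(np.argmax(Q, axis=1))  # greedy policy: move right from states 0..3
```

Deep Q Learning replaces the table `Q` with a neural network trained on the same update target, which is what makes it applicable to continuous arm states like those in `maddux/rl_experiments/`.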

For fun, here are some examples:

Iteration 0

Iteration 100

Iteration 1000
