As some of you may be aware, for a number of years we’ve had an official IRC channel for RobotPy. However, IRC is a bit obscure and isn’t always the friendliest to beginners, so I’m switching the official RobotPy support channel to Gitter instead.
If you’re interested in helping with the ongoing RobotPy 2017 WPILib updates, please join the room to find out how you can contribute!
pynetworktables has been rewritten in the style of ntcore, and now fully
supports all of the NT3 features that are available in ntcore. For the most
part, it should all work. There are a few breaking changes I can think of:

- Connection listeners are different. Sorry.
- The special array types are gone (yay), and so is the networktables2 package
- It’s easier to make client connections (though the old way still works)
- … and that’s about it
I haven’t had the opportunity to try this on a real robot yet, BUT the unit
tests have 75% coverage and it works on my machine, so it’s probably good to go
if you’re using this on a driver station or coprocessor. Try it out and let me
know how it works!
Installation is super easy if you already have python and pip installed:
pip install --pre pynetworktables
Also, if you’re using pynetworktables2js, there’s an alpha release of that
available too, which accommodates some of the NT3 changes. However, more work
needs to be done to fully support all of the NT3 features in pynetworktables2js.
I’m happy to announce the release of an OpenCV input plugin for mjpg-streamer, which allows you to write simple little filter plugins that can process the image from a webcam, and change what is streamed out via HTTP. You can install the mjpg-streamer-cv or mjpg-streamer-py packages using the instructions on our github repo. Here’s an example filter plugin:
import cv2
import numpy as np

class MyFilter:

    def process(self, img):
        '''
        :param img: A numpy array representing the input image
        :returns: A numpy array to send to the mjpg-streamer output plugin
        '''
        # silly routine that overlays a really large crosshair over the image
        h, w = img.shape[:2]
        w2 = int(w/2)
        h2 = int(h/2)
        cv2.line(img, (int(w/4), h2), (int(3*(w/4)), h2), (0xff, 0, 0), thickness=3)
        cv2.line(img, (w2, int(h/4)), (w2, int(3*(h/4))), (0xff, 0, 0), thickness=3)
        return img

def init_filter():
    '''
    This function is called after the filter module is imported.
    It MUST return a callable object (such as a function or
    bound method)
    '''
    f = MyFilter()
    return f.process
If you scp’d this to the roboRIO, you could use the following command line to run it:
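A plausible invocation would look something like this. The filter path, resolution, and port here are assumptions for illustration; adjust them for your setup.

```shell
./mjpg_streamer \
    -i "input_opencv.so -r 320x240 --filter cvfilter_py.so --fargs /home/lvuser/my_filter.py" \
    -o "output_http.so -p 5800 -w ./www"
```

The `--filter` argument loads the Python bridge plugin, and `--fargs` points it at the filter module containing `init_filter`.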
Our team used the OpenCV plugin on our robot this weekend with a Python script to do image processing and NetworkTables operations (Lifecam 3000, 320x240, 15fps, 30 quality), and it used about 20% CPU. Not too shabby. In theory, you could use this on a Raspberry Pi or other platform too, as I’ve pushed the changes (plus some significant build system improvements) to mjpg-streamer upstream.
RobotPy WPILib 2016.2.0 now has full CANTalon support including enhanced sensor support in simulation, the new motion profiling stuff that was introduced for 2016, and a bunch of new setter functions and other random status things. The simulation hal_data structures have been updated as well, which may break your tests. However, the new API should be easier to use and more consistent.
Additionally, PyFRC 2016.2.3 has been released, with a useful new feature that allows you to select autonomous mode via NetworkTables if you’re using the AutonomousModeSelector object to select autonomous modes (used in the Magicbot framework too). Check out this screenshot:
RobotPy releases can be downloaded from our github releases page, and pyfrc can be upgraded using pip.
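For example, assuming pip is available on your development machine:

```shell
pip install --upgrade pyfrc
```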
This is a bugfix release of RobotPy, and all RobotPy users are encouraged to upgrade, particularly owners of NavX devices or those who want to use the PIDController object.
If you want to see the NavX stuff in action, one of the NavX samples that I ported over shows a robot rotating to a specific angle based on a button press, and it works in simulation (not tested on a real robot). Very cool demo – you will need to make sure you have the latest version of pyfrc installed as well.