This release will only work correctly on a RoboRIO that has been reimaged for the 2017 season.
There are a couple of important things to note here:
Because third-party drivers are now supported separately from WPILib, CANTalon has been removed from this release. We hope to release support in a separate library within a few days.
pynetworktables has been rewritten on top of the ntcore library, and it should support all of the cool new things that ntcore supports.
The HAL changes in WPILib this year mean that:
CPU usage should be significantly lower than last year (we observed 4-10% idle usage, compared to 20% last year)
Tests should run a lot faster (I’ve seen 50% improvements on basic tests)
CameraServer has been removed for now. We will have a python-compatible version of the new cscore library available in the near future, which should result in a significantly upgraded experience for image processing
We’ve improved our test coverage significantly, so there’s less chance of things breaking
Additionally, the documentation site has been restructured significantly, so that all RobotPy projects essentially share the same set of documentation – no more need to remember five different sites!
Thanks to everyone who contributed to this release, particularly @james-ward, @auscompgeek, @Twinters007, and @ArthurAllshire!
As some of you may be aware, for a number of years we’ve had an official IRC channel for RobotPy. However, IRC is a bit obscure and isn’t always the friendliest to beginners, so I’m switching the official RobotPy support channel to Gitter instead.
If you’re interested in helping with the ongoing RobotPy 2017 WPILib updates, please join the room to find out how you can contribute!
pynetworktables has been rewritten in the style of ntcore, and now fully
supports all of the NT3 features that are available in ntcore. For the most
part, it should all work. There are a few breaking changes I can think of:
Connection listeners are different. Sorry.
The special array types are gone (yay) and so is the networktables2 package
It’s easier to make client connections (though the old way still works)
… and that’s about it
I haven’t had the opportunity to try this on a real robot yet, BUT the unit
tests have 75% coverage and it works on my machine, so it’s probably good to go
if you’re using this on a driver station or coprocessor. Try it out, let me know
how it works!
Installation is super easy if you already have python and pip installed:
```
pip install --pre pynetworktables
```
Also, if you’re using pynetworktables2js, there’s an alpha release of that
available too, which accommodates some of the NT3 changes. However, more work
needs to be done to fully support all of the NT3 features in pynetworktables2js.
I’m happy to announce the release of an OpenCV input plugin for mjpg-streamer, which allows you to write simple little filter plugins that can process the image from a webcam, and change what is streamed out via HTTP. You can install the mjpg-streamer-cv or mjpg-streamer-py packages using the instructions on our github repo. Here’s an example filter plugin:
```python
import cv2
import numpy as np


class MyFilter:

    def process(self, img):
        '''
            :param img: A numpy array representing the input image
            :returns: A numpy array to send to the mjpg-streamer output plugin
        '''
        # silly routine that overlays a really large crosshair over the image
        h, w = img.shape[:2]
        w2 = int(w/2)
        h2 = int(h/2)
        cv2.line(img, (int(w/4), h2), (int(3*(w/4)), h2), (0xff, 0, 0), thickness=3)
        cv2.line(img, (w2, int(h/4)), (w2, int(3*(h/4))), (0xff, 0, 0), thickness=3)
        return img


def init_filter():
    '''
        This function is called after the filter module is imported.
        It MUST return a callable object (such as a function or
        bound method)
    '''
    f = MyFilter()
    return f.process
```
If you scp’ed this to the RoboRIO, you could use the following command line to run it:
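The original command didn’t survive here, but based on the mjpg-streamer OpenCV input plugin’s options it would look roughly like this (plugin names, paths, and flags are assumptions — check your install’s `--help` output):

```shell
# Assumed invocation: load the OpenCV input plugin with the Python
# filter, and serve the result over HTTP. Adjust paths for your install.
./mjpg_streamer \
    -i "input_opencv.so --filter cvfilter_py.so --fargs /path/to/filter.py" \
    -o "output_http.so -w ./www"
```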
Our team used the OpenCV plugin on our robot this weekend with a python script to do image processing and NetworkTables operations (Lifecam 3000, 320x240, 15fps, 30 quality), and it seemed to be about 20% CPU usage. Not too shabby. In theory, you could use this on a RPi or other platform too, as I’ve pushed the changes (plus some significant build system improvements) to mjpg-streamer upstream.
RobotPy WPILib 2016.2.0 now has full CANTalon support including enhanced sensor support in simulation, the new motion profiling stuff that was introduced for 2016, and a bunch of new setter functions and other random status things. The simulation hal_data structures have been updated as well, which may break your tests. However, the new API should be easier to use and more consistent.
Additionally, PyFRC 2016.2.3 has been released, with a useful new feature that allows you to select autonomous mode via NetworkTables if you’re using the AutonomousModeSelector object to select autonomous modes (used in the Magicbot framework too). Check out this screenshot:
RobotPy releases can be downloaded from our github releases page, and pyfrc can be upgraded using pip.