A step-by-step guide to make your NERDSwerve move
Plus a few interesting projects you might want to try out with your NERDSwerve
Note: Branches named with develop_TunerX are TunerX configurations; branches named with develop_turdswerve are NEO configurations.
Step 1: Download and launch the TunerX app
Step 2: Once TunerX has opened, click the three-line menu icon in the upper-left corner, open the Mechanisms page, and select the Swerve generator
Step 3: In the Pre-Check, set the Module Manufacturer to Custom. Then set the Drive Ratio to 1.36 and the Steer Ratio to 2.2.
The wheel radius is 1.0 inch, and the FL-to-FR distance is 11 inches, as is the FL-to-BL distance. (If your wheel radius, module distances, drive ratio, or steer ratio differ, enter your own values)
Step 4: Complete the procedures and produce new tuner constants.
Step 5: Copy and paste the Tuner Constants into the TunerConstants.java file
Here's the documentation: https://docs.wpilib.org/en/stable/docs/zero-to-robot/step-3/radio-programming.html
Step 1: Download the utility app here
Step 2: Prepare your PC's network settings: disable Wi-Fi if possible, and connect your PC's Ethernet port directly to the radio's LAN port
Step 3: Power the Radio
Step 4: Open the utility app and open the options
Choose your Radio model, choose your team number, select bridge or access point, and optionally set a password
Step 5: Click Load Firmware or Configure. The utility will upload new firmware, apply your team number, and configure the network settings.
What is a subsystem? A subsystem is a Java class that represents a physical part of your robot, like:
An Arm
A Shooter
A Drivetrain
An Elevator
Each subsystem controls hardware (motors, encoders, etc.) and provides methods to operate it, like setAngle(), setSpeed(), or moveToPosition(). Subsystems keep code clean, prevent a motor from being commanded from multiple places at once, and encapsulate logic.
How to code your own Subsystem:
Creating the subsystem class:
The first thing to do is create the subsystem class. Rather than simply creating a new file in the folder, right-click the folder and select "Create a new class/command" at the very bottom. Once you have clicked that, a prompt appears at the top of the window; select the Subsystem class and name it to your preference.
Declaring your Hardware in the class:
The next step is to declare the hardware in your subsystem class. If you are using NEOs, this means declaring fields for your SparkMax motor controllers along with their encoders and PID controllers.
The next step would be to initialize hardware in the constructor:
This consists of setting up your motor IDs, motor type, encoder conversion factors, and PID gains. I would recommend looking up your motor vendor's API documentation before setting anything up, just so you know how to program your motors. You will also need to install your vendor libraries: in VS Code, click the WPILib logo and use "Manage Vendor Libraries" to add them.
The next step would be to expose public methods: Commands will call these methods to control the motors. Directly exposing the SparkMax objects is avoided to maintain encapsulation.
Override periodic():
Although the provided file leaves periodic() empty, you can add periodic actions or telemetry publishing here. periodic() runs once each scheduler cycle (generally every 20 ms).
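Putting the steps above together, here is a hedged sketch of what a NEO-based arm subsystem might look like. It uses REVLib 2023-style class names (import paths changed slightly in later REVLib versions), and the CAN ID, conversion factor, and PID gains are all placeholders, not values from the NERDSwerve code:

```
import com.revrobotics.CANSparkMax;
import com.revrobotics.CANSparkMax.ControlType;
import com.revrobotics.CANSparkMax.IdleMode;
import com.revrobotics.CANSparkMaxLowLevel.MotorType;
import com.revrobotics.RelativeEncoder;
import com.revrobotics.SparkMaxPIDController;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;
import edu.wpi.first.wpilibj2.command.SubsystemBase;

public class ArmSubsystem extends SubsystemBase {
    // Step 1: declare hardware (CAN ID 10 is a placeholder)
    private final CANSparkMax armMotor = new CANSparkMax(10, MotorType.kBrushless);
    private final RelativeEncoder armEncoder;
    private final SparkMaxPIDController armPID;

    public ArmSubsystem() {
        // Step 2: initialize hardware in the constructor
        armMotor.restoreFactoryDefaults();
        armMotor.setIdleMode(IdleMode.kBrake);

        armEncoder = armMotor.getEncoder();
        armEncoder.setPositionConversionFactor(360.0 / 100.0); // placeholder: degrees per motor rotation

        armPID = armMotor.getPIDController();
        armPID.setP(0.05); // placeholder gains; tune for your mechanism
        armPID.setI(0.0);
        armPID.setD(0.0);
    }

    // Step 3: expose public methods instead of the SparkMax objects themselves
    public void setAngle(double degrees) {
        armPID.setReference(degrees, ControlType.kPosition);
    }

    public void stop() {
        armMotor.set(0.0);
    }

    public double getAngleDegrees() {
        return armEncoder.getPosition();
    }

    // Step 4: periodic() runs once per scheduler cycle (about every 20 ms)
    @Override
    public void periodic() {
        SmartDashboard.putNumber("Arm Angle (deg)", getAngleDegrees());
    }
}
```

Commands then call setAngle() or stop() rather than touching the motor directly, which is what keeps the subsystem encapsulated.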
Additional Projects
Objective: Accurately estimate the position of an object on a field.
Requirements:
Camera capable of object detection (e.g., Limelight 4)
Object detection model (e.g., .hef model that can be found at https://docs.limelightvision.io/docs/resources/downloads)
Function that calculates pose of NERDSwerve
Dimensions of camera relative to center of robot
Dimensions of the object
In order to calculate the pose of the object we are detecting, we need the height of the camera (hc) and the height of the center of the object (ho) off the ground and the angle of depression (AOD) of the camera relative to the horizontal. Additionally, we need to be able to obtain the vertical offset (ty) of the object relative to the center of the camera’s FOV. Using trigonometry, we can calculate the horizontal distance between the centers of the camera and the object, which we will represent as dy. We can calculate dy as
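The equation itself appears to have been an image in the original. A standard reconstruction, assuming ty is measured positive upward so the object sits at an angle of AOD − ty below the horizontal, is:

```latex
d_y = \frac{h_c - h_o}{\tan(\mathrm{AOD} - t_y)}
```

The sign of ty depends on your camera's convention, so verify it against a measured distance before relying on it.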
In order to calculate dx, pictured above, we need dy (calculated before) and tx, the horizontal offset of the object relative to the center of the camera’s FOV. Using trigonometry, we can calculate dx with
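Under the same conventions, the missing formula is presumably the right-triangle relation between the forward distance and the horizontal angular offset:

```latex
d_x = d_y \tan(t_x)
```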
Now, we must account for any offsets of the camera from the center of the robot. Here, tx is the angular displacement between the vertical axis of the coordinate system centered on the robot and the sight line of the camera, cx is the horizontal displacement from the center of the robot to the camera, and cy is the vertical displacement from the center of the robot to the camera. Let dx’ be the horizontal displacement from the center of the robot to the object and dy’ be the vertical displacement from the center of the robot to the object. Then
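The original equations are not reproduced here; assuming counterclockwise-positive angles, a standard rotation-plus-translation reconstruction is:

```latex
\begin{aligned}
d_x' &= d_x\cos(t_x) - d_y\sin(t_x) + c_x \\
d_y' &= d_x\sin(t_x) + d_y\cos(t_x) + c_y
\end{aligned}
```

The signs of the sine terms flip if your angles are clockwise-positive, so check against your robot's coordinate conventions.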
Now we will account for the pose of the robot on the field. We can follow a similar process using rx (x-coordinate), ry (y-coordinate), and tr (yaw). If ox and oy are the x- and y-coordinates of the object, respectively, we have that
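Applying the same rotation-plus-translation step with the robot's field pose (again, exact signs depend on your angle conventions) gives a reconstruction of the final equations:

```latex
\begin{aligned}
o_x &= r_x + d_x'\cos(t_r) - d_y'\sin(t_r) \\
o_y &= r_y + d_x'\sin(t_r) + d_y'\cos(t_r)
\end{aligned}
```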
Objective: Log detected objects accurately for future reference.
Requirements:
Detection of objects and ability to calculate pose of object (see Pose Estimation of Objects)
Ability to obtain class name of object
After calculating the pose of a detected object, it may be helpful to store that pose in a list for future reference. This helps if the robot needs to drive to an object but doesn't have time to do so immediately, or needs to turn toward an object so that a vision-dependent auto-drive algorithm can take over. To do this, you can create a class that represents the logged object and a class that manages an array of those objects. In this example, we create two classes: CoralObject and CoralArrayManager.
The purpose of the object class is to be able to store the pose of a newly-detected object in an array. The advantage of creating an object class like the one above is that we can assign other attributes to it, such as the time it was detected, the distance it is from the robot, its orientation, and if it is being actively targeted. This is extremely helpful when it is necessary to filter out objects or choose one that is best for targeting.
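The original class isn't reproduced here, but a minimal self-contained sketch of such an object class might look like the following. The field names and units are assumptions for illustration, not the team's exact CoralObject:

```java
// Hypothetical sketch of a logged-object class: stores the pose of a
// detected coral plus metadata that later filters can use.
class CoralObject {
    private final double x;        // field x-coordinate (meters, assumed)
    private final double y;        // field y-coordinate (meters, assumed)
    private final long heartbeat;  // frame number the coral was detected in
    private boolean targeted = false;

    CoralObject(double x, double y, long heartbeat) {
        this.x = x;
        this.y = y;
        this.heartbeat = heartbeat;
    }

    double getX() { return x; }
    double getY() { return y; }
    long getHeartbeat() { return heartbeat; }
    boolean isTargeted() { return targeted; }
    void setTargeted(boolean targeted) { this.targeted = targeted; }

    /** Straight-line distance from a robot position to this coral (same units as x and y). */
    double distanceTo(double robotX, double robotY) {
        return Math.hypot(x - robotX, y - robotY);
    }
}
```

A distance helper like distanceTo() is what makes "choose the closest coral" filters cheap to write later.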
--
We can create these filters by creating an array manager class for our array of detected objects. For example, in CoralArrayManager, we establish multiple functions that take a List<CoralObject> as an input, check a certain property for each CoralObject in the array, and remove CoralObjects that don't meet a certain criterion, functioning as a filter.
Above is an example of a filter that removes older corals from the list. The filter works by checking the heartbeat of each coral, which is the frame the coral was detected in. It then finds the difference between the coral's heartbeat and the current frame and compares it to what the difference would be for a coral that is, in this example, 5 seconds old. This checks if the logged coral is older than 5 seconds. If it is, the coral is flagged to be removed, and it is removed after all the corals have been checked. These filters can be called periodically to repeatedly filter the List<CoralObject> after a new coral is added to it, as seen below.
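As a self-contained sketch of the age filter described above: the 50-frames-per-second rate and the class names here are assumptions so the example can run on its own; the team's actual filter lives in CoralArrayManager.

```java
import java.util.List;

// Minimal stand-in for the logged-object class; for the age filter,
// only the heartbeat (the frame the coral was detected in) matters.
record LoggedCoral(long heartbeat) {}

class CoralAgeFilter {
    static final long FRAMES_PER_SECOND = 50; // assumed vision loop rate
    static final long MAX_AGE_SECONDS = 5;

    /** Removes corals detected more than MAX_AGE_SECONDS before currentFrame. */
    static void filterOldCorals(List<LoggedCoral> corals, long currentFrame) {
        long maxAgeFrames = MAX_AGE_SECONDS * FRAMES_PER_SECOND; // 250 frames at 50 fps
        // Flag-and-remove in one pass: removeIf deletes after checking every coral
        corals.removeIf(coral -> currentFrame - coral.heartbeat() > maxAgeFrames);
    }
}
```

Calling this periodically keeps the list pruned to corals the robot has seen recently.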
The function coralArrayUpdateReturn() takes the previously-declared List<CoralObject> corals, adds a newly-detected coral to it, and then runs all filters on it if the robot isn't targeting a coral for pick-up. However, if the robot is targeting, it runs the selectCoral() function on the coral list declared in utils.CoralArrayManager, which selects the closest coral and filters out the rest of the corals from the list. The code for the CoralObject, CoralArrayManager, and periodically-called function can be found at the links below:
CoralArrayManager: https://github.com/nerdspark/NERDSwerve/blob/develop_2025_TunerX_Prowl_DriveToCoral/src/main/java/frc/robot/util/CoralArrayManager.java
coralArrayUpdateReturn: https://github.com/nerdspark/NERDSwerve/blob/develop_2025_TunerX_Prowl_DriveToCoral/src/main/java/frc/robot/subsystems/PoseEstimatorSubsystem.java, lines 274-287
Objective: Select a detected object to drive to automatically and successfully do so.
Requirements:
Detection of objects and ability to calculate pose of object (see Pose Estimation of Objects)
Logging of detected objects
Auto-drive to an input pose (can be found here)
Using our logs of objects that were detected on the field, we can select an object and have the NERDSwerve drive to it without user guidance. For example, in the develop_2025_TunerX_Prowl_DriveToCoral branch, we use the selectCoral() function declared in CoralArrayManager to select the detected coral that is currently closest to the robot. This function also clears the log of all other detected corals. It is called in PoseEstimatorSubsystem when the boolean Constants.Vision.kCoralTargeted, which is false by default, becomes true. This also locks the log, preventing new corals from being added to the list.
Using triggers, we can assign a button on a controller that sets this boolean to true and executes a command which drives to an object in the list. Since selectCoral() removes all other corals in the list except for the selected one, the command can only choose the coral that was selected. This is set up as the command taking the first coral in the list as input, which is also the only coral in the list.
The trigger coralInRange is added to ensure that the closest coral is within some maximum distance, and the trigger coralInList ensures that the list is not empty. If the list were empty when the command ran, the code would crash because index 0 would be out of the bounds of the list.
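A hedged sketch of what such a binding might look like with WPILib triggers. The trigger names come from the text, but the CoralArrayManager methods, the driveToPose command, the button choice, and the 2-meter range are illustrative assumptions, not the team's exact code:

```
// Illustrative only: getCorals(), closestCoralDistance(), and driveToPose are assumed names
Trigger coralInList = new Trigger(() -> !coralArrayManager.getCorals().isEmpty());
Trigger coralInRange = new Trigger(() -> coralArrayManager.closestCoralDistance() < 2.0);

joystick.b()
    .and(coralInList)
    .and(coralInRange)
    .whileTrue(
        Commands.runOnce(() -> Constants.Vision.kCoralTargeted = true)
            // selectCoral() leaves only one coral, so index 0 is the selected one
            .andThen(driveToPose(() -> coralArrayManager.getCorals().get(0).getPose())));
```

The .and() composition is what guarantees the command never runs on an empty or out-of-range list.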
Link to the PathPlanner Docs: https://pathplanner.dev/home.html
What's included:
Installation
Configuring AutoBuilder and SendableChooser
How-to Robot Configuration
Registering and Using Commands
Command Groups
PathPlanner GUI to Files
Installation:
To get started with PathPlanner once you have a functioning robot, you will first need to download and install the FRC PathPlanner application. The latest releases of PathPlanner can be installed manually here (Windows, macOS, Linux), or you can use the Microsoft Store download for PathPlanner with automatic updates here.
Once you've done that, you will need to add PathPlannerLib to your robot code. To do so, click the WPILib icon in the upper-right corner, type "Manage Vendor Libraries" in the search bar that appears, select it, and then choose "Install new libraries (online)". Then paste in the JSON file URL shown on the PathPlanner Docs here.
Configuring AutoBuilder and SendableChooser:
Once you have everything installed, you’ll need to add some stuff to your code to allow paths/autos to be built and loaded to your dashboard. More extensive instructions can be found on the PathPlanner Docs, but demonstrated here is how these components have been implemented into the NERD Swerve code available on our Github repository in the CommandSwerveDrivetrain class.
First, we load the robotConfig from the GUI settings and then configure the AutoBuilder. While configuring the AutoBuilder, you must supply the current robot state and set drivetrain PID constants. If anything fails, the try-catch structure reports an error. This configureAutoBuilder() method is called multiple times in the CommandSwerveDrivetrain class.
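For reference, here is a sketch of what configureAutoBuilder() might contain, modeled on the PathPlanner docs and CTRE's generated drivetrain template. The PID gains are placeholders, and the exact API shape depends on your PathPlannerLib version, so check the docs against your installed version:

```
private void configureAutoBuilder() {
    try {
        // Load the robot config that was entered in the PathPlanner GUI
        RobotConfig config = RobotConfig.fromGUISettings();
        AutoBuilder.configure(
            () -> getState().Pose,      // supplier for the current robot pose
            this::resetPose,            // resets odometry to a given pose
            () -> getState().Speeds,    // supplier for robot-relative chassis speeds
            // m_pathApplyRobotSpeeds is the drivetrain's robot-speeds swerve request
            (speeds, feedforwards) -> setControl(m_pathApplyRobotSpeeds.withSpeeds(speeds)),
            new PPHolonomicDriveController(
                new PIDConstants(10, 0, 0),  // translation PID (placeholder gains)
                new PIDConstants(7, 0, 0)),  // rotation PID (placeholder gains)
            config,
            // Flip paths when on the red alliance
            () -> DriverStation.getAlliance().orElse(Alliance.Blue) == Alliance.Red,
            this);
    } catch (Exception ex) {
        DriverStation.reportError(
            "Failed to load PathPlanner config and configure AutoBuilder", ex.getStackTrace());
    }
}
```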
You will also need a SendableChooser in order to send your autos to your dashboard to be selected and run. Again, you will find more extensive instructions on the PathPlanner Docs, but this is how we implemented the code: in configureAutoChooser(), we build the auto chooser and put it on the SmartDashboard under the title "Auto chooser". getAutonomousCommand() is called when auton is enabled and returns the selected auto.
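A minimal sketch of that wiring, assuming AutoBuilder has already been configured (field placement and method names beyond the PathPlannerLib/WPILib calls are illustrative):

```
// Field in RobotContainer; buildAutoChooser() finds all autos in the deploy directory
private final SendableChooser<Command> autoChooser = AutoBuilder.buildAutoChooser();

// In configureAutoChooser() / the RobotContainer constructor:
SmartDashboard.putData("Auto chooser", autoChooser);

// Called by Robot when autonomous is enabled:
public Command getAutonomousCommand() {
    return autoChooser.getSelected();
}
```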
How-to Robot Configuration: [Under Construction]
It's important to correctly configure your NERD Swerve in order for PathPlanner to accurately calculate key information. To access the Robot Config menu, do as follows:
There are a lot of values that need to be entered accurately. The PathPlanner Docs have great information on how to do this; they are linked here.
Registering Commands:
Commands are helpful for running different subsystems at the same time as a path. Commands can be set to occur only at a certain point in the path, or to run for a specified period of time. One rule about commands is that two commands cannot require the same subsystem at the same time. If you are driving a path and call the drivetrain in a parallel command at the same time, the code will crash and you will get errors. The same thing happens if a parallel command group asks a subsystem like an arm to run more than one command at a time. You can only make something do one thing at a time because teleportation isn't possible (☹️).
Sometimes, you will want to have these commands happen at certain points in a path. You can do this by creating event markers. Once you create an event marker, you can adjust where it will show up in the path by the position slider/value. You can also change what the event marker will do by adding commands or sequences of commands in a command group.
In order to use these subsystems to perform actions, you first need to create commands. Once you've created those commands in your code, you will need to register them by configuring them in your RobotContainer: you must call NamedCommands.registerCommand(name, command). Here's how some example commands are set up to run in our current code.
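A sketch of what that registration might look like; the string names and the subsystem command factories here are hypothetical examples, not the team's actual commands:

```
private void configureNamedCommands() {
    // The string names are examples; whatever you register here must match
    // the command name typed into the PathPlanner GUI exactly.
    NamedCommands.registerCommand("ScoreCoral", armSubsystem.scoreCommand());       // hypothetical
    NamedCommands.registerCommand("IntakeDeploy", intakeSubsystem.deployCommand()); // hypothetical
}
```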
After registering them in our current structure, you must call configureNamedCommands() in the RobotContainer() constructor. Once registered, they will be available to be called in the PathPlanner GUI. The name of the registered command MUST exactly match what you type into the command in PathPlanner. After typing the name of the command, be sure to press the "enter" key so that the information is saved for that path/auto.
Command Groups:
In PathPlanner, a command-based structure manages how everything is run. To run any auton routine, you'll need to build an auto. Everything in a PathPlanner auto is contained within a sequential group, meaning each element will run one after the other.
When building an auto, you add to the starting sequential group. You can add follow-path commands, named commands, and wait commands. You can also add command groups that can house different commands, paths, or even more groups!
Follow paths add one path to the sequential command group. Named commands run one of the created commands (for example, this could be controlling a separate subsystem like an arm). Wait commands wait for a specific amount of time before continuing in the sequential command group.
Command groups work a little bit differently by allowing for more flexibility and customization in the command structure. There are four command group options that PathPlanner allows, which are as follows.
Sequential Command Group-- runs each command sequentially, one after the other. This is useful when you want to run commands one at a time.
Parallel Command Group-- runs each command at the same time as the other during the command group, and exits the command group when all the commands inside of it have finished. This is helpful when you want to simultaneously run commands until everything is done.
Parallel Deadline Group-- runs like a parallel group, but it only exits the command group once the designated deadline command has completed. This is helpful when you want to run commands simultaneously until some condition has been met.
Parallel Race Group-- runs like a parallel group except that it exits the command group when the first finishing command is completed. This is helpful when you want to run multiple commands at once until a faster command has completed.
PathPlanner GUI to Files:
In PathPlanner, information entered in the GUI is stored in .path or .auto JSON files. The actual path is represented by a series of x, y coordinates like this:
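For illustration, the waypoint list in a .path file looks roughly like the fragment below. The coordinate values are made up, and the exact schema varies between PathPlanner versions, so treat this as a shape to recognize rather than a spec:

```
{
  "version": "2025.0",
  "waypoints": [
    {
      "anchor": { "x": 2.0, "y": 7.0 },
      "prevControl": null,
      "nextControl": { "x": 2.8, "y": 6.5 },
      "isLocked": false,
      "linkedName": null
    },
    {
      "anchor": { "x": 4.5, "y": 5.0 },
      "prevControl": { "x": 3.7, "y": 5.6 },
      "nextControl": null,
      "isLocked": false,
      "linkedName": null
    }
  ]
}
```

Each waypoint stores its anchor point plus the Bezier control points that shape the curve into and out of it.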
EventMarkers will be represented in these json files like so with information on position in the path as well as what the command or command group is actually referencing:
You will also be able to see the globalConstraints as well as the goalEndState, which you can input into the PathPlanner GUI:
Autos will show the type of command sequence (sequential, parallel, race) as well as the information that you put in there, such as your commands and paths:
💡 Pro tip: If there is a bunch of "null", then you probably haven't properly entered information from the GUI. Make sure that you have pressed the "enter" key after inputting names and values; that's how information is saved with PathPlanner. Also, if you enter an incorrect name for a command, PathPlanner won't know what information to grab, and your command will not run because it won't be stored in the generated JSON file.
If you're new and confused on how the PathPlanner GUI works, our team made a little slideshow tutorial on what the buttons do here.