WiFiQuick

ESP32/8266 Platformio/Arduino library that painlessly enables incredibly fast re-connect to the previous wireless network after deep sleep.

This library stores all connection settings in RTC RAM, so they survive deep sleep as long as power is not lost. This is much faster than the native Arduino method, which saves some of this information in a special segment of flash. Besides being faster, this approach eliminates those flash write cycles, reducing flash wear and lowering power consumption. The biggest time saver is eliminating the network scan before connecting. A smaller but still significant amount of time is saved by storing the previously issued IP address, gateway, netmask, and DNS server, so DHCP is skipped on every connection after the first. WiFiQuick.begin() has a default timeout of 10 seconds; this can be changed by supplying a time in seconds as its final (or only) argument, depending on how you choose to initiate the connection.
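For instance, overriding the default timeout might look like this (a sketch assuming the timeout-as-final-argument form described above; the network credentials are placeholders):

```
#include <WiFiQuick.h>

WiFiQuick WiFiQuick;

void setup() {
  // 20 second timeout instead of the default 10:
  WiFiQuick.begin("NETWORK_NAME", "PASSWORD", 20);
}

void loop() {}
```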

Installation

ArduinoIDE:

  • Clone or download and extract this repository into your Arduino libraries folder.

Platformio:

  • Install or add "winford/WiFiQuick" to your project using the pio library manager.
  • or add it to your project's platformio.ini file:
lib_deps = 
	https://github.com/UncleGrumpy/WiFiQuick.git

Usage

A growing collection of examples is included. In the ArduinoIDE they show up in the usual menu:

File > Examples > WiFiQuick

The simplest usage looks like:

#include <WiFiQuick.h>

const char* ssid = "NETWORK_NAME";
const char* password = "PASSWORD";

WiFiQuick WiFiQuick;

void setup() {

  // Start connection...
  WiFiQuick.begin(ssid, password);

}

void loop() {

  // You can safely add RF_DISABLED as a second argument to leave WiFi off
  // when the ESP first wakes up. WiFiQuick will turn it back on before connecting.
  ESP.deepSleep(10e6);  // sleep for 10 seconds (argument is in microseconds)
}


There are two basic ways to start a connection. WiFiQuick.begin() works much as you would expect: calling it as in the example above starts a connection and waits for it to complete.

Method 1

WiFiQuick.begin(ssid, pass);

For Static IP it looks like:

WiFiQuick.begin(ssid, pass, IP, gateway, netmask, dns);

or for a longer timeout:

WiFiQuick.begin(ssid, pass, IP, gateway, netmask, dns, 30);  // 30 second timeout 
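The static-IP arguments above are ordinary IPAddress objects. A sketch of how they might be declared (the addresses here are made-up examples; use values valid on your own network):

```
#include <ESP8266WiFi.h>   // or <WiFi.h> on ESP32

IPAddress IP(192, 168, 1, 50);       // static address for this device
IPAddress gateway(192, 168, 1, 1);   // your router
IPAddress netmask(255, 255, 255, 0);
IPAddress dns(192, 168, 1, 1);       // often the router as well
```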

Method 2

You can also start the connection without waiting for it to finish. This can be useful if you are trying to optimize your run time. Just be sure to include delays to give the WiFi time to negotiate the connection, and avoid anything processor-heavy in the meantime. The connection is started with the init() method, but you must still call begin() afterwards so that your settings are stored for a faster re-connection next time.

WiFiQuick.init(SSID, PASSWORD);  // starts the connection

/* do some other stuff... */

WiFiQuick.begin();	// returns "true" if connection is successful
			// saves your netinfo for faster connection next time.

Or for static IP:

WiFiQuick.init(SSID, PASSWORD, IP, gateway, netmask, dns);  // starts the connection

/* do some other stuff... */

WiFiQuick.begin(30);	// 30 second timeout.
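Putting Method 2 together, a complete sketch might look like this (the "other work" section and sleep interval are placeholders, not part of the library's API):

```
#include <WiFiQuick.h>

const char* ssid = "NETWORK_NAME";
const char* password = "PASSWORD";

WiFiQuick WiFiQuick;

void setup() {
  WiFiQuick.init(ssid, password);   // start connecting, don't block

  // Do work that doesn't need the network yet, e.g. read a sensor.
  // Keep it light and include short delays so WiFi can negotiate.
  delay(50);

  if (WiFiQuick.begin()) {          // wait for the connection, save netinfo
    // connected: publish readings, etc.
  }
}

void loop() {
  ESP.deepSleep(10e6);              // sleep 10 seconds
}
```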

WARNING

Make sure your board is set up to wake from deep sleep! For example...

  • D1 Mini > connect D0 to RST.
  • ESP-12F > connect GPIO16 to RST.
  • ESP-01 > see https://blog.enbiso.com/post/esp-01-deep-sleep/ for the necessary modifications.
    • For this modification I personally like to use conductive paint and a sharp needle to apply it...
Comments

  • ESP32 not connecting as fast as it should. (Jul 21, 2022)

    Tests with the ESP32 suggest that WiFi persistence is not being completely disabled. Connection times are completely unaffected by setting WiFi.persistent() to true or false. Setting it to false should greatly change connection times under normal circumstances, which it does not; toggling the value makes no difference in connection times with or without this library, so more must be required to fully disable it.