- 20 Feb, 2024 1 commit
-
Damien George authored
Signed-off-by: Damien George <damien@micropython.org>
-
- 15 Feb, 2024 14 commits
-
Damien George authored
The Python BLE IRQ handler will most likely run on the NimBLE task, so its C stack must be large enough to accommodate reasonably complicated Python code (eg a few call depths). So increase this stack size. Also increase the headroom from 1024 to 2048 bytes. This is needed because (1) the esp32 architecture uses a fair amount of stack in general; and (2) by the time execution gets to setting the Python stack top via `mp_stack_set_top()` in this interlock code, about 600 bytes of stack are already used, which reduces the amount available for Python. Fixes issue #12349. Signed-off-by: Damien George <damien@micropython.org>
-
Damien George authored
This is needed in case callbacks must run (eg a disconnect event happens during the deinit), in which case the GIL must be obtained to run the callback. Fixes part of issue #12349. Signed-off-by: Damien George <damien@micropython.org>
-
Damien George authored
Similar to the previous commit but for MP_BLUETOOTH_IRQ_GATTC_READ_DONE: the pending_value_handle needs to be reset before calling mp_bluetooth_gattc_on_read_write_status(), which will call the Python IRQ handler, which may in turn call back into BTstack to perform an action like a write. In that case the pending_value_handle will need to be available for the write/read/etc to proceed. Fixes issue #13634. Signed-off-by: Damien George <damien@micropython.org>
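For illustration, a minimal sketch (not from the commit) of the kind of Python IRQ handler that exercises this path, chaining a GATT client write from inside the read-done callback. The connection setup and handles are placeholders; the IRQ constant values are the ones documented for the bluetooth module.

    import bluetooth

    _IRQ_GATTC_READ_RESULT = 15
    _IRQ_GATTC_READ_DONE = 16

    ble = bluetooth.BLE()
    ble.active(True)

    def bt_irq(event, data):
        if event == _IRQ_GATTC_READ_RESULT:
            conn_handle, value_handle, char_data = data
            print("read:", bytes(char_data))
        elif event == _IRQ_GATTC_READ_DONE:
            conn_handle, value_handle, status = data
            # Chaining a write from inside the IRQ handler only works if the
            # stack has already released its pending_value_handle.
            ble.gattc_write(conn_handle, value_handle, b"\x01", 1)

    ble.irq(bt_irq)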
-
Damien George authored
The pending_value_handle needs to be freed and reset before calling mp_bluetooth_gattc_on_read_write_status(), which will call the Python IRQ handler, which may in turn call back into BTstack to perform an action like a write. In that case the pending_value_handle will need to be available for the write/read/etc to proceed. Fixes issue #13611. Signed-off-by: Damien George <damien@micropython.org>
-
Takeo Takahashi authored
Tested on Portenta C33 with AT24256B (addrsize=16) and SSD1306. Fixes issue #13280. Signed-off-by: Takeo Takahashi <takeo.takahashi.xv@renesas.com>
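For reference, a short usage sketch of the access pattern this fixes; the bus id, wiring and device addresses are assumptions, so adjust them for the actual board:

    from machine import I2C

    i2c = I2C(0)              # default I2C bus; adjust id/pins for the board
    print(i2c.scan())         # eg [60, 80] for SSD1306 + AT24256B

    # 16-bit memory addressing, as used by the AT24256B EEPROM
    i2c.writeto_mem(0x50, 0x0000, b"hello", addrsize=16)
    data = i2c.readfrom_mem(0x50, 0x0000, 5, addrsize=16)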
-
Damien George authored
If a return is executed within the try block of a try-finally then the return value is stored on the top of the Python stack during the execution of the finally block. In this case the Python stack is one element deeper than it normally would be in the finally block. Prior to this commit, the compiler was not taking this case into account, and the Python stack could overflow if the stack used by the finally block was more than that used elsewhere in the function. In such a scenario the last argument of the function would be clobbered by the top-most temporary value used in the deepest Python expression/statement. This commit fixes that case by making sure enough Python stack is allocated to the function. Fixes issue #13562. Signed-off-by: Damien George <damien@micropython.org>
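An illustrative (hypothetical) shape of affected code: the return value is held on the Python stack while the finally block runs, so a finally block needing more stack than the rest of the function could previously overflow and clobber the last argument.

    def f(a, b, last_arg):
        try:
            return a + b  # return value is held on the Python stack...
        finally:
            # ...while this deeper expression also needs stack space
            print((a + 1) * (b + 2) * (last_arg + 3))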
-
Damien George authored
These separate drivers must share the DMA resource with each other. Fixes issue #13380. Signed-off-by: Damien George <damien@micropython.org>
-
Damien George authored
Prior to this commit it would skip every second cipher returned from mbedtls. The corresponding test is also updated and now passes on esp32, rp2, stm32 and unix. Signed-off-by: Damien George <damien@micropython.org>
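A quick way to list the available ciphers (a sketch, assuming the port's ssl module exposes SSLContext.get_ciphers() as used by the updated test):

    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    for name in ctx.get_ciphers():
        print(name)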
-
Kwabena W. Agyeman authored
Signed-off-by: Kwabena W. Agyeman <kwagyeman@live.com>
-
Damien George authored
Adds support to asyncio.gather() for the case that one or more (or all) sub-tasks finish and/or raise an exception before the gather starts. Signed-off-by: Damien George <damien@micropython.org>
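A small sketch of the newly supported case (task bodies and sleep value are arbitrary): both sub-tasks complete, one with an exception, before the gather starts.

    import asyncio

    async def quick():
        return 42

    async def failing():
        raise ValueError("boom")

    async def main():
        t1 = asyncio.create_task(quick())
        t2 = asyncio.create_task(failing())
        await asyncio.sleep(0.1)  # both sub-tasks complete before the gather starts
        print(await asyncio.gather(t1, t2, return_exceptions=True))

    asyncio.run(main())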
-
iabdalkader authored
Switch the RTC clock source to Sub-clock (XCIN). This board has an accurate LSE crystal, and it should be used for the RTC clock source. Signed-off-by: iabdalkader <i.abdalkader@gmail.com>
-
iabdalkader authored
The SysTick_Config function must use the system/CPU clock to configure the ticks. Signed-off-by: iabdalkader <i.abdalkader@gmail.com>
-
robert-hh authored
Avoid waiting up to the full timeout in the worst case. Fixes issue #13377. Signed-off-by: robert-hh <robert@hammelrath.com>
-
Nicko van Someren authored
Signed-off-by: Nicko van Someren <nicko@nicko.org>
-
- 05 Jan, 2024 4 commits
-
Damien George authored
Signed-off-by: Damien George <damien@micropython.org>
-
Damien George authored
Prior to this commit there is a potential deadlock in mp_thread_begin_atomic_section(), when obtaining the atomic_mutex, in the following situation:
- main thread calls mp_thread_begin_atomic_section() (for whatever reason, doesn't matter)
- the second core is running so the main thread grabs the mutex via the call mp_thread_mutex_lock(&atomic_mutex, 1), and this succeeds
- before the main thread has a chance to run save_and_disable_interrupts() a USB IRQ comes in and the main thread jumps off to process this IRQ
- that USB processing triggers a call to the dcd_event_handler() wrapper from commit bcbdee23
- that then calls mp_sched_schedule_node()
- that then attempts to obtain the atomic section, calling mp_thread_begin_atomic_section()
- that call then blocks trying to obtain atomic_mutex
- core0 is now deadlocked on itself, because the main thread has the mutex but the IRQ handler (which preempted the main thread) is blocked waiting for the mutex, which will never be free

The solution in this commit is to use mutex enter/exit functions that also atomically disable/restore interrupts. Fixes issues #12980 and #13288. Signed-off-by: Damien George <damien@micropython.org>
-
Damien George authored
These allow entering/exiting a mutex and also disabling/restoring interrupts, in an atomic way. Signed-off-by: Damien George <damien@micropython.org>
-
Damien George authored
Using the multicore lockout feature in the general atomic section makes it much more difficult to get correct. Signed-off-by: Damien George <damien@micropython.org>
-
- 27 Dec, 2023 1 commit
-
Damien George authored
Signed-off-by: Damien George <damien@micropython.org>
-
- 22 Dec, 2023 6 commits
-
Daniël van de Giessen authored
Instead, configure the default once at compile-time. This means the GAP name will no longer be set to default after re-initializing Bluetooth. Signed-off-by: Daniël van de Giessen <daniel@dvdgiessen.nl>
-
Damien George authored
Signed-off-by: Damien George <damien@micropython.org>
-
Nicko van Someren authored
This commit implements fairly complete support for the DMA controller in the rp2 series of microcontrollers. It provides a class for accessing the DMA channels through a high-level, Pythonic interface, and functions for setting and manipulating the DMA channel configurations.

Creating an instance of the rp2.DMA class claims one of the processor's DMA channels. A sensible, per-channel default value for the ctrl register can be fetched from the DMA.pack_ctrl() function, and the components of this register can be set via keyword arguments to pack_ctrl().

The read, write, count and ctrl attributes of the DMA class provide read/write access to the respective registers of the DMA controller. The config() method allows any or all of these values to be set simultaneously and adds a trigger keyword argument to allow the setup to immediately be triggered. The read and write attributes (or keywords in config()) accept either actual addresses or any object that supports the buffer interface. The active() method provides read/write control of the channel's activity, allowing the user to start and stop the channel and test if it is running.

Standard MicroPython interrupt handlers are supported through the irq() method and the channel can be released either by deleting it and allowing it to be garbage-collected or with the explicit close() method.

Direct, unfettered access to the DMA controller's registers is provided through a proxy memoryview() object returned by the DMA.registers attribute that maps directly onto the memory-mapped registers. This is necessary for more fine-grained control and is helpful for allowing chaining of DMA channels.

As a simple example, using DMA to do a fast memory copy just needs:

    src = bytearray(32 * 1024)
    dest = bytearray(32 * 1024)
    dma = rp2.DMA()
    dma.config(read=src, write=dest, count=len(src) // 4, ctrl=dma.pack_ctrl(), trigger=True)

    # Wait for completion
    while dma.active():
        pass

This API aims to strike a balance between simplicity and comprehensiveness.

Signed-off-by: Nicko van Someren <nicko@nicko.org>
Signed-off-by: Damien George <damien@micropython.org>
-
Sebastian Romero authored
This makes it possible to follow good practice and have libraries live in the lib folder, which means they will be found by the runtime without having to add this path manually at runtime. Signed-off-by: Sebastian Romero <s.romero@arduino.cc>
-
Peter Züger authored
When compiling with distcc, it does not understand the -MD flag on its own. This fixes the interaction by explicitly adding the -MF option. The error in distcc is described here under "Problems with gcc -MD": https://www.distcc.org/faq.html Signed-off-by: Peter Züger <zueger.peter@icloud.com>
-
Peter Züger authored
The calculation of the lfs2 cache_size was incorrect: the maximum allowed size is block_size. The cache size must be "a multiple of the read and program sizes, and a factor of the block size". Signed-off-by: Peter Züger <zueger.peter@icloud.com>
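The constraint expressed as a small (hypothetical) Python check, using example geometry values not taken from the commit:

    read_size = 32     # example geometry
    prog_size = 32
    block_size = 4096

    def valid_cache_size(n):
        return (n % read_size == 0 and n % prog_size == 0
                and block_size % n == 0 and n <= block_size)

    # Choose the largest permissible cache size.
    cache_size = max(n for n in range(1, block_size + 1) if valid_cache_size(n))
    print(cache_size)  # 4096 for these example sizes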
-
- 21 Dec, 2023 7 commits
-
Maarten van der Schrieck authored
MicroPython code may rely on the return value of sys.stdout.buffer.write() to reflect the number of bytes actually written. While in most scenarios a write() operation is successful, there are cases where it fails, leading to data loss. This problem arises because, currently, write() merely returns the number of bytes it was supposed to write, without indication of failure.

One scenario where write() might fail is where USB is used and the receiving end doesn't read quickly enough to empty the receive buffer. In that case, write() on the MicroPython side can time out, resulting in the loss of data without any indication, a behavior observed notably in communication between a Pi Pico as a client and a Linux host using the ACM driver.

A complex issue arises with mp_hal_stdout_tx_strn() when it involves multiple outputs, such as USB, dupterm and hardware UART. The challenge is in handling cases where writing to one output is successful, but another fails, either fully or partially.

This patch implements the following solution: mp_hal_stdout_tx_strn() attempts to write len bytes to all of the possible destinations for that data, and returns the minimum successful write length.

The implementation of this is complicated by several factors:
- multiple outputs may be enabled or disabled at compile time
- multiple outputs may be enabled or disabled at runtime
- mp_os_dupterm_tx_strn() is one such output, optionally containing multiple additional outputs
- each of these outputs may or may not be able to report success
- each of these outputs may or may not be able to report partial writes

As a result, there's no single strategy that fits all ports, necessitating unique logic for each instance of mp_hal_stdout_tx_strn().

Note that addressing sys.stdout.write() is more complex due to its data modification process ("cooked" output), and it remains unchanged in this patch. Developers who are concerned about accurate return values from write operations should use sys.stdout.buffer.write().

This patch might disrupt some existing code, but it's also expected to resolve issues, considering that the peculiar return value behavior of sys.stdout.buffer.write() is not well-documented and likely not widely known. Therefore, it's improbable that much existing code relies on the previous behavior.

Signed-off-by: Maarten van der Schrieck <maarten@thingsconnected.nl>
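With this change, code that must not silently drop bytes can check the return value and retry the remainder; a sketch with an arbitrary payload:

    import sys

    def write_all(data):
        view = memoryview(data)
        while len(view):
            n = sys.stdout.buffer.write(view)
            if not n:
                # Nothing was accepted (eg the USB receive buffer stayed full);
                # back off or report the error as appropriate.
                break
            view = view[n:]

    write_all(b"x" * 10000)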
-
Maarten van der Schrieck authored
In case of multiple outputs, the minimum successful write length is returned. In line with this, in case any output has a write error, zero is returned. In case of no outputs, -1 is returned. The return value can be used to assess whether writes were attempted, and if so, whether they succeeded. Signed-off-by: Maarten van der Schrieck <maarten@thingsconnected.nl>
-
Jim Mussared authored
This adds an `add_library(name, path)` method for use in manifest.py that allows registering an external path (e.g. to another repo) by name. This name can then be passed to `require("package", library="name")` to reference packages in that repo/library rather than micropython-lib. Within the external library, `require()` continues to work as normal (referencing micropython-lib) by default, but it can also specify the library name to require another package from that repo/library. Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
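For example, in a manifest.py (the library name, path and package names here are hypothetical):

    # Register an external library directory under a name...
    add_library("my-lib", "/path/to/other/repo")

    # ...then pull packages from it by name instead of from micropython-lib.
    require("sensor_driver", library="my-lib")

    # Plain require() still resolves against micropython-lib by default.
    require("aioble")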
-
IhorNehrutsa authored
Signed-off-by: IhorNehrutsa <Ihor.Nehrutsa@gmail.com>
-
IhorNehrutsa authored
This change was missed in e7ae3ad9. Signed-off-by: IhorNehrutsa <Ihor.Nehrutsa@gmail.com>
-
Jim Mussared authored
Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
-
Jim Mussared authored
The poll_obj_t instances have their pollfd field point into this allocation. So if re-allocating results in a move, we need to update the existing poll_obj_t's. Update the test to cover this case. Fixes issue #12887. This work was funded through GitHub Sponsors. Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
-
- 20 Dec, 2023 5 commits
-
Damien George authored
This adds support to stm32's mboot for the Microsoft WCID USB 0xee string and Compatible ID Feature Descriptor. This allows the USB device to automatically set the default USB driver, so that when the device is plugged in Windows will assign the winusb driver to it. This means that USB DFU mode can be used without installing any drivers. For example this page will work (allow the board to be updated over DFU) with zero install: https://devanlai.github.io/webdfu/dfu-util/ Tested on Windows 10, Windows can read the 0xee string correctly, and requests the second special descriptor, which then configures the USB device to use the winusb driver. Signed-off-by: Damien George <damien@micropython.org>
-
Damien George authored
Signed-off-by: Damien George <damien@micropython.org>
-
Damien George authored
Signed-off-by: Damien George <damien@micropython.org>
-
Damien George authored
Signed-off-by: Damien George <damien@micropython.org>
-
Damien George authored
It looks like these never worked and there are no tests for this functionality. Furthermore, CPython doesn't support this. Fixes #12995. Signed-off-by: Damien George <damien@micropython.org>
-
- 19 Dec, 2023 2 commits
-
Damien George authored
This gets back the old heap-size behaviour on ESP32, before auto-split-heap was introduced: after the heap is grown one time the size is 111936 bytes, with about 40k left for the IDF. That's enough to start WiFi and do a HTTPS request. Signed-off-by: Damien George <damien@micropython.org>
-
Damien George authored
There are two main changes here to improve the calculation of the size of the next heap area when automatically expanding the heap:
- Compute the existing total size by counting the total number of GC blocks, and then using that to compute the corresponding number of bytes.
- Round the bytes value up to the nearest multiple of BYTES_PER_BLOCK.

This makes the calculation slightly simpler and more accurate, and makes sure that, in the case of growing from one area to two areas, the number of bytes allocated from the system for the second area is the same as the first. For example on esp32 with an initial area size of 65536 bytes, the subsequent allocation is also 65536 bytes. Previously it was a number that was not even a multiple of 2. Signed-off-by: Damien George <damien@micropython.org>
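Roughly, in pseudo-Python (a sketch only; the BYTES_PER_BLOCK value and the helper are stand-ins for the GC internals):

    BYTES_PER_BLOCK = 16  # stand-in value

    def next_area_size(existing_total_blocks, overhead_bytes):
        # Existing total size, derived from the GC block count plus per-area overhead.
        total_bytes = existing_total_blocks * BYTES_PER_BLOCK + overhead_bytes
        # Round up to the nearest multiple of BYTES_PER_BLOCK.
        return (total_bytes + BYTES_PER_BLOCK - 1) // BYTES_PER_BLOCK * BYTES_PER_BLOCK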
-