OpenAI has launched a bug bounty program that pays registered security researchers for finding vulnerabilities in its product line and reporting them through the Bugcrowd crowdsourced security platform.
According to the company, rewards are based on the severity and impact of the reported issue. Payouts range from $200 for low-severity findings up to $20,000 for exceptional discoveries.
OpenAI said the program is a way to recognize and reward the work of security researchers who help keep its technology and company secure.
However, while the OpenAI Application Programming Interface (API) and the ChatGPT artificial-intelligence chatbot are in scope for bounty hunters, the company has asked researchers to report model issues through a separate form unless they have a security impact.
OpenAI explains that model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be fixed directly. Addressing them often requires substantial research and a broader approach.

"To ensure that these concerns are properly addressed, please report them using the appropriate form, rather than submitting them through the bug bounty program. Reporting them in the right place allows our researchers to use these reports to improve the model," the company said.
Other issues excluded from the program include jailbreaks and safety bypasses that users have been exploiting to trick the chatbot into ignoring the safeguards implemented by OpenAI engineers.
Last month, OpenAI disclosed a ChatGPT payment data leak that the company attributed to a bug in an open-source Redis client library used by its platform.

OpenAI confirmed that the flaw was in redis-py, the open-source Redis client library, and said it promptly provided the Redis maintainers with a patch once the bug was identified.
Although the company did not link today's announcement to that incident, the flaw might have been caught earlier, and the data leak possibly avoided, had OpenAI already had a bug bounty program in place allowing researchers to test its products for security vulnerabilities.