システムロジック教室_A1/random(1/1)

教師A
2024-10-22 20:43:19
Changed the channel name from "ソーシャル" to "random"
教師A
2024-11-02 21:04:36
Thinking memo
  1. Register a hyperlink function using Formula
  2. Along with 1, set column A to text format
  3. To clear the formatting of cell A3, remember Range("A3").ClearFormats (apply it after deleting)
  4. To delete the link formatting of cell A3, use Range("A3").Hyperlinks.Delete
教師A
2024-11-02 21:09:34
Range("A3").Hyperlinks.Delete and Range("A3").ClearFormats are commands that behave differently. Details below.
1. Range("A3").Hyperlinks.Delete
  • Summary: Deletes the hyperlink itself that is set on the cell.
  • Effect: Removes the hyperlink's target information and the link formatting (blue text and underline). Because the target is removed, the cell stops working as a link.
  • Use case: When the cell contains a link and you want to remove the link target and the link itself.
Range("A3").Hyperlinks.Delete
2. Range("A3").ClearFormats
  • Summary: Clears only the cell's formatting.
  • Effect: Resets the cell's font style, color, underline, and other formatting, but keeps the link target. In other words, the blue text and underline disappear, but the link itself stays active.
  • Use case: When you want to restore the link formatting (blue text and underline) to the normal appearance.
Range("A3").ClearFormats
Summary of the differences
  • Hyperlinks.Delete: deletes the hyperlink information (removes the hyperlink: yes / removes the formatting: yes)
  • ClearFormats: deletes only the formatting (removes the hyperlink: no / removes the formatting: yes)
Hyperlinks.Delete is the right choice when you want to remove the hyperlink's functionality; ClearFormats when you only want to remove the blue text and underline formatting.
教師A
2024-11-02 21:11:20
They need to be used together.
教師A
2024-11-02 21:12:00
The cause is a leftover hyperlink: the hyperlink formatting remains → the value is deleted → the hyperlink is deleted but the formatting remains
教師A
2024-11-02 21:49:18
Thinking memo 2
With Range("A3")
    .Formula = .Value ' Convert the formula into its value
End With
教師A
2024-11-06 07:35:04
The cause of the failure was that Delete cannot act on rows hidden by the autofilter.
✅1
教師A
2024-11-10 14:08:16
BitLocker memo
manage-bde.exe -protectors -get c:
manage-bde.exe -protectors -get d:
image.png
教師A
2024-11-10 14:12:32
Google Drive restart batch
教師A
2024-11-10 14:14:03
教師A
2024-11-14 06:56:51
教師A
2024-11-14 06:57:03
The insert fails
教師A
2024-11-14 07:16:00
Windows check batch
教師A
2024-11-14 22:05:09
With VBA you can set conditional formatting on the range A4:Z4 so that the background turns red (RGB(255, 0, 0)) when the value of cell B4 is 1 to 3, like this:
Sub SetConditionalFormatting()
    Dim ws As Worksheet
    Set ws = ThisWorkbook.Sheets("Sheet1") ' Change to the target sheet name

    ' Target the range A4:Z4
    With ws.Range("A4:Z4").FormatConditions
        .Delete ' Remove existing conditional formats
        ' Add the conditional format
        .Add Type:=xlExpression, Formula1:="=AND(B4>=1, B4<=3)"
        .Item(1).Interior.Color = RGB(255, 0, 0) ' Set the background color to red
    End With
End Sub
Explanation
  • FormatConditions.Delete removes any existing conditional formats.
  • .Add Type:=xlExpression, Formula1:="=AND(B4>=1, B4<=3)" specifies the condition applied when B4 is in the range 1 to 3.
  • .Item(1).Interior.Color = RGB(255, 0, 0) sets the background color used when the condition holds.
Running this macro turns the background of A4:Z4 red whenever the value of B4 falls within 1 to 3. If you want to specify the cell position in Formula1 numerically, you can build it dynamically in the form Cells(row, column). For example, to address cell B4 as (4, 2), write:
Sub SetConditionalFormatting()
    Dim ws As Worksheet
    Set ws = ThisWorkbook.Sheets("Sheet1") ' Change to the target sheet name

    Dim targetRow As Long
    Dim targetColumn As Long
    targetRow = 4
    targetColumn = 2

    ' Target the range A4:Z4
    With ws.Range("A4:Z4").FormatConditions
        .Delete ' Remove existing conditional formats
        ' Specify B4 as (4, 2) and build the formula
        .Add Type:=xlExpression, Formula1:="=AND(" & ws.Cells(targetRow, targetColumn).Address(False, False) & ">=1," & ws.Cells(targetRow, targetColumn).Address(False, False) & "<=3)"
        .Item(1).Interior.Color = RGB(255, 0, 0) ' Set the background color to red
    End With
End Sub
Explanation
  • ws.Cells(targetRow, targetColumn).Address(False, False) returns the cell position as "B4", so a cell specified by numeric row and column can be embedded in the formula.
  • Address(False, False) makes the cell reference relative (B4) rather than absolute ($B$4).
Running this code turns the background of A4:Z4 red when the value of B4 is within 1 to 3. .Address(False, False) is the method that returns the address (position) of the specified cell in relative-reference form. The .Address method takes the following options:
.Cells(row, column).Address(RowAbsolute, ColumnAbsolute)
  • Cells(4, 2).Address(True, True) → absolute reference: $B$4
  • Cells(4, 2).Address(False, False) → relative reference: B4
  • Cells(4, 2).Address(True, False) → row absolute, column relative: B$4
  • Cells(4, 2).Address(False, True) → row relative, column absolute: $B4
Handy when you want relative references in VBA conditional formatting and the like.
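For reference, the Address option matrix above can be mimicked in plain Python; the helper name a1_address is made up for this sketch and is not part of any library.

```python
# A pure-Python sketch of what VBA's Cells(row, col).Address(RowAbsolute, ColumnAbsolute)
# produces for A1-style references.
def a1_address(row, col, row_absolute=True, col_absolute=True):
    # Convert a 1-based column number to letters (1 -> A, 2 -> B, 27 -> AA, ...)
    letters = ""
    while col > 0:
        col, rem = divmod(col - 1, 26)
        letters = chr(ord("A") + rem) + letters
    return (("$" if col_absolute else "") + letters
            + ("$" if row_absolute else "") + str(row))

print(a1_address(4, 2, True, True))    # $B$4
print(a1_address(4, 2, False, False))  # B4
print(a1_address(4, 2, True, False))   # B$4  (row absolute)
print(a1_address(4, 2, False, True))   # $B4  (column absolute)
```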
✅1
教師A
2024-11-26 07:42:29
Redmine ??? failed
from bs4 import BeautifulSoup
import requests

TOKEN_NAME = "authenticity_token"
GET_LOGIN_URL = "https://my.redmine.jp/demo/login"
POST_LOGIN_URL = "https://my.redmine.jp/demo/login"
USERNAME = "reporter"
PASSWORD = "reporter"

def post_login(url, parameter={}, cookies={}):
    response = requests.post(url, json=parameter, cookies=cookies, allow_redirects=False)
    print(cookies)
    cookies = {c.name: c.value for c in response.cookies}
    print(response.text)
    print(response.status_code)

def get_login(url):
    response = requests.get(url, allow_redirects=False)
    cookies = {str(c.name): str(c.value) for c in response.cookies}
    soup = BeautifulSoup(response.text, "html.parser")
    input_target = soup.find_all("input", attrs={"name": TOKEN_NAME, "type": "hidden"})
    input_target_value = None
    if len(input_target) > 0 and input_target[0].has_attr("value"):
        input_target_value = input_target[0]["value"]
    return (input_target_value, response.text, cookies)

if "__main__" == __name__:
    data = get_login(url=GET_LOGIN_URL)
    if data[0] is not None:
        parameter = {
            TOKEN_NAME: data[1],  # NOTE: passes the page HTML (data[1]) instead of the token (data[0])
            "username": USERNAME,
            "password": PASSWORD,
            "back_url": "/demo/projects/demo",
            "login": "ログイン",
        }
        post_login(url=POST_LOGIN_URL, parameter=parameter, cookies=data[2])
教師A
2024-11-26 07:43:35
教師A
2024-11-26 22:02:44
Successful access with Python (Redmine)
from bs4 import BeautifulSoup
import requests

HTML_TOKEN_NAME = "csrf-token"
POST_TOKEN_NAME = "authenticity_token"
GET_LOGIN_URL = "https://my.redmine.jp/demo/login"
POST_LOGIN_URL = "https://my.redmine.jp/demo/login"
USERNAME = "reporter"
PASSWORD = "reporter"

def post_login(url, parameter={}, cookies={}):
    response = requests.post(url, json=parameter, cookies=cookies, allow_redirects=False)
    print("RequestHeader(Step2):", response.request.headers)
    print("ResponseHeader(Step2):", response.headers)
    print("StatusCode(Step2):", response.status_code)
    cookies = {c.name: c.value for c in response.cookies}

def get_login(url):
    response = requests.get(url, allow_redirects=False)
    print("RequestHeader(Step1):", response.request.headers)
    print("ResponseHeader(Step1):", response.headers)
    print("StatusCode(Step1):", response.status_code)
    cookies = {str(c.name): str(c.value) for c in response.cookies}
    soup = BeautifulSoup(response.text, "html.parser")
    input_target = soup.find_all("meta", attrs={"name": HTML_TOKEN_NAME})
    input_target_value = None
    if len(input_target) > 0 and input_target[0].has_attr("content"):
        input_target_value = input_target[0]["content"]
    return (input_target_value, response.text, cookies)

if "__main__" == __name__:
    data = get_login(url=GET_LOGIN_URL)
    print("Step1:", data[0])
    if data[0] is not None:
        parameter = {
            POST_TOKEN_NAME: data[0],
            "username": USERNAME,
            "password": PASSWORD,
            "back_url": "/demo/projects/demo",
            "login": "ログイン",
        }
        post_login(url=POST_LOGIN_URL, parameter=parameter, cookies=data[2])
教師A
2024-11-26 22:03:18
Output
RequestHeader(Step1): {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate, br', 'Accept': '*/*', 'Connection': 'keep-alive'}
ResponseHeader(Step1): {'Date': 'Tue, 26 Nov 2024 13:01:41 GMT', 'Server': 'Apache', 'cache-control': 'max-age=0, private, must-revalidate', 'vary': 'Accept,Accept-Encoding', 'referrer-policy': 'strict-origin-when-cross-origin', 'x-permitted-cross-domain-policies': 'none', 'x-xss-protection': '1; mode=block', 'x-request-id': 'e89225cb-8a6f-4f5a-96b7-5ecc29522035', 'x-download-options': 'noopen', 'etag': 'W/"d186e1137efebcecd03dbed76f187228"', 'x-frame-options': 'SAMEORIGIN', 'x-content-type-options': 'nosniff', 'Status': '200 OK', 'Content-Type': 'text/html; charset=utf-8', 'Set-Cookie': '_redmine_session=a24zYWkrcER1dlE0cWdvYmtXakp3Wmh5MVV3UHNmeXN0T1lzUnI0WUhwTHl2MEpSSjQwRURQTWVoQm9mNzVxOTNTdjBmZ2dRMjdXWmZpQTJad0NxZHAvMFZOREY5b0dLY3h5amdCRllqeTdoVy83cHNGMnNSRGNFTk11VGdRbTBIOXJ5OEhiNWh3N0JheWJ0ZUZlOG1vWkdKU2NJUkZjTDBIT2NVZ25rSjVnZGpnMTJWV1phcVRNM3FYaGZoOEdJLS1lTXIwN1crdVlCeHYwR25Udml1QUNnPT0%3D--2ac961a36e84a103e4e10b6f922b512fa42c9f8e; path=/demo; httponly', 'X-Robots-Tag': 'noindex,nofollow,noarchive', 'Content-Encoding': 'gzip', 'Content-Length': '4006', 'Keep-Alive': 'timeout=4, max=100', 'Connection': 'Keep-Alive'}
StatusCode(Step1): 200
Step1: -RtbqaQr5NtAUXTJ0fdxi9JENT5gcnFW9-DmBkXPGOWwgN8RuT0x43D9G_dXwAxBCQfl3H4oss_u8Udxi8ZeGA
RequestHeader(Step2): {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate, br', 'Accept': '*/*', 'Connection': 'keep-alive', 'Cookie': '_redmine_session=a24zYWkrcER1dlE0cWdvYmtXakp3Wmh5MVV3UHNmeXN0T1lzUnI0WUhwTHl2MEpSSjQwRURQTWVoQm9mNzVxOTNTdjBmZ2dRMjdXWmZpQTJad0NxZHAvMFZOREY5b0dLY3h5amdCRllqeTdoVy83cHNGMnNSRGNFTk11VGdRbTBIOXJ5OEhiNWh3N0JheWJ0ZUZlOG1vWkdKU2NJUkZjTDBIT2NVZ25rSjVnZGpnMTJWV1phcVRNM3FYaGZoOEdJLS1lTXIwN1crdVlCeHYwR25Udml1QUNnPT0%3D--2ac961a36e84a103e4e10b6f922b512fa42c9f8e', 'Content-Length': '232', 'Content-Type': 'application/json'}
ResponseHeader(Step2): {'Date': 'Tue, 26 Nov 2024 13:01:41 GMT', 'Server': 'Apache', 'cache-control': 'no-cache', 'referrer-policy': 'strict-origin-when-cross-origin', 'x-permitted-cross-domain-policies': 'none', 'x-xss-protection': '1; mode=block', 'x-request-id': 'a128d7e0-5a67-4ff3-a4d4-6d614b35a1a9', 'x-download-options': 'noopen', 'x-frame-options': 'SAMEORIGIN', 'x-content-type-options': 'nosniff', 'location': 'https://my.redmine.jp/demo/projects/demo', 'Status': '302 Found', 'Content-Type': 'text/html; charset=utf-8', 'Set-Cookie': '_redmine_session=eEtRRTh0WEVKdk13ejV2T2VubzlmSDBCR3RETGI1N1Y2Wm1IdEZ3SUMrTWVRT3JrVm43SGVCL1kwZjR2QVBQOHR0TGJIbmVVQUxiOFZ6R3F1MnhpWkFwU1FWY2d6d0VJVTBTWE1DcXFLbFB3dzJXWVByVm8zbzRaRkozRkQ0ZWxtN2xydE5FdmNzQzdLRlo3NzZmdWZTUExvODdBZml5UGpBVFFQSDVGL1pMZWpZRVByUTdQVVl3dlI4am83ZktZZFRXYmxaSjlLVUJxRmVyR0N1SHBpNEFGdnRxem5FcXVUMzM4aFJrSXZxWT0tLTZCY3ZqQzZlTmQ3ZExBZGxpcHFpdVE9PQ%3D%3D--c4aaa35749baf1f8a68909df91a82af8b2f83970; path=/demo; httponly', 'X-Robots-Tag': 'noindex,nofollow,noarchive', 'Vary': 'Accept-Encoding', 'Content-Encoding': 'gzip', 'Content-Length': '20', 'Keep-Alive': 'timeout=4, max=100', 'Connection': 'Keep-Alive'}
StatusCode(Step2): 302
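The Set-Cookie headers above get parsed by hand later in these notes (in both the VBA and raw-socket experiments). For comparison, Python's standard http.cookies module can split a Set-Cookie line into its value and attributes; this minimal sketch uses a shortened stand-in session value, not the real one from the log.

```python
from http.cookies import SimpleCookie

# Parse a Set-Cookie style line with the stdlib instead of string searching.
raw = "_redmine_session=abc123; path=/demo; HttpOnly"
cookie = SimpleCookie()
cookie.load(raw)

print(cookie["_redmine_session"].value)    # abc123
print(cookie["_redmine_session"]["path"])  # /demo
```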
教師A
2024-11-26 22:04:07
So the cause is one of the following, or a combination:
・a percent-encoding mistake
・a POST parameter mistake
・a cookie mistake
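Of the three suspects, the POST-parameter one is easy to pin down: the scripts above pass json=parameter to requests.post, which sends a JSON body with Content-Type: application/json, whereas a Rails-style login form expects application/x-www-form-urlencoded. A sketch of what the form-encoded body should look like, built with the stdlib (the values are the demo credentials already used above, password omitted):

```python
from urllib.parse import urlencode

# What a form-encoded login body should look like; urlencode percent-encodes
# the UTF-8 bytes of non-ASCII values such as "ログイン".
params = {
    "username": "reporter",
    "back_url": "/demo/projects/demo",
    "login": "ログイン",
}
body = urlencode(params)
print(body)
# username=reporter&back_url=%2Fdemo%2Fprojects%2Fdemo&login=%E3%83%AD%E3%82%B0%E3%82%A4%E3%83%B3
```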
教師A
2024-11-27 22:02:28
Verification: BASIC (VBA)
教師A
2024-11-27 22:02:45
Sub SendRequest()
    Dim xmlhttp As Object
    Dim url As String
    Dim response As String, responseText As String
    Dim csrfToken As String
    Dim cookies As String
    Dim postData As String
    Dim buffer As Object

    ' URL settings
    Dim GET_LOGIN_URL As String
    Dim POST_LOGIN_URL As String
    GET_LOGIN_URL = "https://my.redmine.jp/demo/login"
    POST_LOGIN_URL = "https://my.redmine.jp/demo/login"

    ' Fetch the CSRF token (GET request)
    Set buffer = GetLogin(GET_LOGIN_URL)
    response = buffer.getAllResponseHeaders()
    responseText = buffer.responseText
    MsgBox response

    ' Parse the CSRF token and cookies
    csrfToken = ParseMetaTag(responseText, "csrf-token")
    cookies = ParseCookies(response)

    If csrfToken <> "" Then
        ' Build the POST request parameters
        postData = "authenticity_token=" & csrfToken & _
            "&username=reporter" & _
            "&password=reporter" & _
            "&back_url=%2Fdemo%2Fprojects%2Fdemo" & _
            "&login=ログイン"
        ' Attempt the login (POST request)
        response = PostLogin(POST_LOGIN_URL, postData, cookies)
        MsgBox response
    End If
End Sub

Function GetLogin(url As String) As Object
    Dim xmlhttp As Object
    Set xmlhttp = CreateObject("MSXML2.ServerXMLHTTP")
    ' Send the GET request
    xmlhttp.Open "GET", url, False
    xmlhttp.Send
    ' Return the response object (Set is required for an object assignment)
    Set GetLogin = xmlhttp
End Function

Function PostLogin(url As String, postData As String, cookies As String) As String
    Dim xmlhttp As Object
    Set xmlhttp = CreateObject("MSXML2.ServerXMLHTTP")
    ' Send the POST request
    xmlhttp.Open "POST", url, False
    xmlhttp.setRequestHeader "Content-Type", "application/x-www-form-urlencoded"
    If cookies <> "" Then
        xmlhttp.setRequestHeader "Cookie", cookies
    End If
    xmlhttp.Send postData
    ' Return the response (the second assignment overwrites the first,
    ' so only the headers are actually returned)
    PostLogin = xmlhttp.responseText
    PostLogin = xmlhttp.getAllResponseHeaders()
End Function

Function ParseMetaTag(html As String, name As String) As String
    Dim startPos As Long
    Dim endPos As Long
    Dim metaTag As String
    ' Search for the meta tag
    startPos = InStr(html, "<meta name=""" & name & """ content=""")
    If startPos > 0 Then
        startPos = startPos + Len("<meta name=""" & name & """ content=""")
        endPos = InStr(startPos, html, """")
        metaTag = Mid(html, startPos, endPos - startPos)
        ParseMetaTag = metaTag
    Else
        ParseMetaTag = ""
    End If
End Function

Function ParseCookies(response As String) As String
    Dim startPos As Long
    Dim endPos As Long
    Dim cookies As String
    Dim cookie As String
    cookies = ""
    ' Scan the Set-Cookie headers and collect the cookies
    startPos = InStr(1, response, "Set-Cookie:")
    Do While startPos > 0
        startPos = startPos + Len("Set-Cookie:")
        endPos = InStr(startPos, response, vbCrLf)
        cookie = Mid(response, startPos, endPos - startPos)
        If Len(cookies) > 0 Then
            cookies = cookies & "; " & Trim(cookie)
        Else
            cookies = Trim(cookie)
        End If
        startPos = InStr(startPos, response, "Set-Cookie:")
    Loop
    ParseCookies = cookies
End Function
教師A
2024-11-27 22:03:05
Verification: low-level Python <success>
教師A
2024-11-27 22:03:12
import socket
import ssl
import urllib.parse

HTML_TOKEN_NAME = "csrf-token"
POST_TOKEN_NAME = "authenticity_token"
GET_LOGIN_URL = "https://my.redmine.jp/demo/login"
POST_LOGIN_URL = "https://my.redmine.jp/demo/login"
USERNAME = "reporter"
PASSWORD = "reporter"

def send_request(host, port, request):
    """Send a request over a socket and receive the response."""
    sock = socket.create_connection((host, port))
    sock = ssl.wrap_socket(sock)  # HTTPS support
    sock.sendall(request.encode("utf-8"))
    # Receive the response
    response = b""
    while True:
        data = sock.recv(4096)
        if not data:
            break
        response += data
    sock.close()
    return response.decode("utf-8")

def parse_headers_and_body(response):
    """Split an HTTP response into headers and body."""
    header_end = response.find("\r\n\r\n")
    headers = response[:header_end]
    body = response[header_end + 4:]
    return headers, body

def parse_meta_tag(html, name):
    """Extract the content attribute of the given meta tag from the HTML."""
    meta_start = html.find(f'<meta name="{name}" content="')
    if meta_start == -1:
        return None
    content_start = meta_start + len(f'<meta name="{name}" content="')
    content_end = html.find('"', content_start)
    return html[content_start:content_end]

def parse_cookies(headers):
    """Extract cookies from the response headers."""
    cookies = {}
    for line in headers.split("\r\n"):
        if line.lower().startswith("set-cookie:"):
            parts = line.split(": ", 1)[1].split(";")[0].split("=")
            if len(parts) == 2:
                cookies[parts[0].strip()] = parts[1].strip()
    return cookies

def manual_urlencode(params):
    """URL-encode a dict of parameters."""
    encoded_str = ""
    for key, value in params.items():
        if encoded_str:
            encoded_str += "&"  # separate parameters with &
        encoded_str += manual_quote(str(key)) + "=" + manual_quote(str(value))
    return encoded_str

def manual_quote(value):
    """Hand-rolled percent-encoding."""
    safe_chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_.~"
    result = ""
    for char in value:
        if char in safe_chars:
            result += char
        else:
            result += "%" + format(ord(char), "02X")
    return result

def get_login(url):
    """Send a GET request and obtain the CSRF token and cookies."""
    parsed_url = urllib.parse.urlparse(url)
    host, path = parsed_url.netloc, parsed_url.path
    request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    response = send_request(host, 443, request)
    headers, body = parse_headers_and_body(response)
    cookies = parse_cookies(headers)
    csrf_token = parse_meta_tag(body, HTML_TOKEN_NAME)
    return csrf_token, body, cookies

def post_login(url, parameter={}, cookies={}):
    """Send a POST request to log in."""
    parsed_url = urllib.parse.urlparse(url)
    host, path = parsed_url.netloc, parsed_url.path
    encoded_params = manual_urlencode(parameter)
    cookie_header = "; ".join(f"{key}={value}" for key, value in cookies.items())
    request = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Content-Type: application/x-www-form-urlencoded\r\n"
        f"Content-Length: {len(encoded_params)}\r\n"
        f"Cookie: {cookie_header}\r\n"
        f"Connection: close\r\n\r\n"
        f"{encoded_params}"
    )
    response = send_request(host, 443, request)
    headers, body = parse_headers_and_body(response)
    print("-" * 60)
    print("RequestHeader(Step2): ■")
    print(request)
    print("-" * 60)
    print("ResponseHeader(Step2): ■")
    print(headers)
    print("-" * 60)
    print("StatusCode(Step2):", headers.split("\r\n")[0])  # status line

if __name__ == "__main__":
    # GET request to fetch the token
    data = get_login(url=GET_LOGIN_URL)
    print("-" * 60)
    print("Step1 Token:", data[0])
    if data[0] is not None:
        # POST request to log in
        parameters = {
            POST_TOKEN_NAME: data[0],
            "username": USERNAME,
            "password": PASSWORD,
            "back_url": "/demo/projects/demo",
            "login": "ログイン",
        }
        post_login(url=POST_LOGIN_URL, parameter=parameters, cookies=data[2])
教師A
2024-11-27 22:03:51
C:\Users\tttak>C:\Users\tttak\Desktop\l2.txt.py
C:\Users\tttak\Desktop\l2.txt.py:15: DeprecationWarning: ssl.wrap_socket() is deprecated, use SSLContext.wrap_socket()
  sock = ssl.wrap_socket(sock) # HTTPS通信対応
------------------------------------------------------------
Step1 Token: ceJ9TvmKjfQUeyV8lWe0n3TVzitETbRwm_ah3vS8n3y_00KvYqAMoRo26y0lAupchXkjHJUXERw0UhRI-66adw
------------------------------------------------------------
RequestHeader(Step2): ■
POST /demo/login HTTP/1.1
Host: my.redmine.jp
Content-Type: application/x-www-form-urlencoded
Content-Length: 203
Cookie: _redmine_session=ZUFUMVJIYklYYTZ3dno0ZGVMVHAzdldXTHU3U0NyUWhNSy9udjNCa21rRHpXYkNoL2gyYkhFa1NSNFowVG9JOGFhZ1BrRHdTMHRBdS9Mck1nN2Vtd1JxL3R4TllOVmo2RG5XZ25BSHhmSmNGblYvTnFPRWZpU2NXcVJLbStFYmhmdDlnTnZTejFtUVFydzdUS3c3SVI5Ky8zWG9LbmpIU1lBT2dYZWpOMDdqcHdMd1JVYzA3cWlVOTdxeXU1eU9yLS1XWEQ5c3Zmb1ZKWGRXa2o2R2lPZ2hBPT0%3D--b22ecbf71ed9cb15b602413321738bdc25e6c056
Connection: close

authenticity_token=ceJ9TvmKjfQUeyV8lWe0n3TVzitETbRwm_ah3vS8n3y_00KvYqAMoRo26y0lAupchXkjHJUXERw0UhRI-66adw&username=reporter&password=reporter&back_url=%2Fdemo%2Fprojects%2Fdemo&login=%30ED%30B0%30A4%30F3
------------------------------------------------------------
ResponseHeader(Step2): ■
HTTP/1.1 302 Found
Date: Wed, 27 Nov 2024 13:03:40 GMT
Server: Apache
cache-control: no-cache
referrer-policy: strict-origin-when-cross-origin
x-permitted-cross-domain-policies: none
x-xss-protection: 1; mode=block
x-request-id: bdbb6384-c38d-43f9-9049-bcda5642fd3b
x-download-options: noopen
x-frame-options: SAMEORIGIN
x-content-type-options: nosniff
location: https://my.redmine.jp/demo/projects/demo
content-length: 0
Status: 302 Found
Content-Type: text/html; charset=utf-8
Set-Cookie: _redmine_session=SEIwVDgzeEZobmdFRGZ3Y1hXcGZUdWRRRDE4STRmWFdDcElEVGdUK1RXM3Vld1R0cVFvVDhHYjZIaE1aR3BIZUk3Vi9XdWZVd1o1dzF3T3hsRGFGWnpiUWQ5NUliUDRqMi9DMkE1KzBBZWFRZStGdUVxNWkrOHBTakdpS1YwdVRUZytjMlp5Q0J6MkJiSHp6ZkFzQXVnSTI5S0pySERWYnNseWZuTVNnamxRVHJaZlUyWTYzM1YzYzl3dGRrdnVadzdWd3FteHg4YXNNelJBMUZpR1RNK244MUIwZkFTaWptSzh5UytVTmYrQT0tLTloeVVhK1lPSWNxM25QSHYrc0tUeFE9PQ%3D%3D--5bae0bc4886df499a08e492a088bd8d13d7fd35d; path=/demo; httponly
X-Robots-Tag: noindex,nofollow,noarchive
Vary: Accept-Encoding
Connection: close
------------------------------------------------------------
StatusCode(Step2): HTTP/1.1 302 Found

C:\Users\tttak>
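One detail worth flagging in the dump above: login=%30ED%30B0%30A4%30F3 is not valid percent-encoding. manual_quote escapes ord(char), i.e. the Unicode code point, while standard percent-encoding escapes the UTF-8 bytes of the character (the demo server still answered 302 here, so it apparently tolerated or ignored that field). A sketch of the difference against the stdlib:

```python
from urllib.parse import quote

def manual_quote(value):
    # The hand-rolled version from the script above: encodes ord(char),
    # which is only correct for single-byte (ASCII/Latin-1) characters.
    safe_chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-_.~"
    result = ""
    for char in value:
        if char in safe_chars:
            result += char
        else:
            result += "%" + format(ord(char), "02X")
    return result

print(manual_quote("ログイン"))  # %30ED%30B0%30A4%30F3 (code points, not valid percent-encoding)
print(quote("ログイン"))         # %E3%83%AD%E3%82%B0%E3%82%A4%E3%83%B3 (UTF-8 bytes)
```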
教師A
2024-12-22 15:01:15
Separator character code (ASCII Unit Separator)
Chr(31)
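Chr(31) is the ASCII Unit Separator (US), a control character that is handy as a field delimiter because it never appears in normal text. A quick Python sketch of the idea:

```python
# Join fields with the Unit Separator (ASCII 31, VBA's Chr(31)) and split them back;
# unlike a comma, it cannot collide with the field contents.
US = "\x1f"

record = US.join(["Alice", "2024-12-22", "memo, with commas"])
print(record.split(US))  # ['Alice', '2024-12-22', 'memo, with commas']
```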
教師A
2024-12-22 22:45:30
My head isn't working, so I'm writing down the input-side code
Function GetFilledCellIndexes() As Variant
    Dim cell As Range
    Dim dict As Object
    Dim key As Variant
    Dim sortedKeys() As Variant
    Dim i As Integer

    ' Create the Dictionary
    Set dict = CreateObject("Scripting.Dictionary")

    ' Loop over the range A1:A10
    For Each cell In Range("A1:A10")
        If Not IsEmpty(cell.Value) Then
            ' Store non-empty cells in the dictionary (key=value, value=row index)
            dict(cell.Value) = cell.Row
        End If
    Next cell

    ' Sort the dictionary keys in ascending order
    sortedKeys = SortDictionaryKeys(dict)

    ' Build the resulting array of row indexes
    Dim result() As Integer
    ReDim result(1 To dict.Count)
    For i = 1 To dict.Count
        result(i) = dict(sortedKeys(i - 1))
    Next i

    ' Return the result
    GetFilledCellIndexes = result
End Function

' Sort the dictionary keys in ascending order
Function SortDictionaryKeys(dict As Object) As Variant
    Dim key As Variant
    Dim keys() As Variant
    Dim i As Integer, j As Integer
    Dim temp As Variant

    ' Copy the dictionary keys into an array
    keys = dict.Keys

    ' Bubble sort, ascending
    For i = LBound(keys) To UBound(keys) - 1
        For j = i + 1 To UBound(keys)
            If keys(i) > keys(j) Then
                temp = keys(i)
                keys(i) = keys(j)
                keys(j) = temp
            End If
        Next j
    Next i

    ' Return the sorted keys
    SortDictionaryKeys = keys
End Function
教師A
2024-12-22 23:24:50
教師A
2025-01-10 22:15:43
<01/13>
・Meeting minutes
・QA tally macro (regex support for keywords/anti-keywords, add a path column (large impact))
・(With Talend) → CSV
教師A
2025-02-13 07:20:38
Encoding-related?
def decode_hex_in_all_encodings(hex_input):
    # Convert the hex string to binary data first
    byte_data = bytes.fromhex(hex_input)
    # Try every standard character encoding
    encodings = [
        "ascii", "big5", "big5hkscs", "cp037", "cp273", "cp424", "cp437", "cp500",
        "cp720", "cp737", "cp775", "cp850", "cp852", "cp855", "cp856", "cp857",
        "cp858", "cp860", "cp861", "cp862", "cp863", "cp864", "cp865", "cp866",
        "cp869", "cp874", "cp875", "cp932", "cp949", "cp950", "cp1006", "cp1026",
        "cp1125", "cp1140", "cp1250", "cp1251", "cp1252", "cp1253", "cp1254",
        "cp1255", "cp1256", "cp1257", "cp1258", "euc_jp", "euc_jis_2004",
        "euc_jisx0213", "euc_kr", "gb2312", "gbk", "gb18030", "hz", "iso2022_jp",
        "iso2022_jp_1", "iso2022_jp_2", "iso2022_jp_2004", "iso2022_jp_3",
        "iso2022_jp_ext", "iso2022_kr", "latin_1", "iso8859_2", "iso8859_3",
        "iso8859_4", "iso8859_5", "iso8859_6", "iso8859_7", "iso8859_8",
        "iso8859_9", "iso8859_10", "iso8859_11", "iso8859_13", "iso8859_14",
        "iso8859_15", "iso8859_16", "johab", "koi8_r", "koi8_t", "koi8_u",
        "kz1048", "mac_cyrillic", "mac_greek", "mac_iceland", "mac_latin2",
        "mac_roman", "mac_turkish", "ptcp154", "shift_jis", "shift_jis_2004",
        "shift_jisx0213", "utf_32", "utf_32_be", "utf_32_le", "utf_16",
        "utf_16_be", "utf_16_le", "utf_7", "utf_8", "utf_8_sig",
    ]
    decoded_strings = {}
    for encoding in encodings:
        try:
            # Attempt to decode
            decoded_strings[encoding] = byte_data.decode(encoding)
        except (UnicodeDecodeError, TypeError):
            # Ignore decode failures
            decoded_strings[encoding] = None
    return decoded_strings
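A quick usage sketch of the same brute-force idea on a two-byte sample; the hex string 82A0 is the Shift_JIS encoding of "あ", so it decodes under cp932 but not as UTF-8:

```python
# Try one hex string against two encodings by hand; the full function above
# does the same across every codec in its list.
byte_data = bytes.fromhex("82A0")
print(byte_data.decode("cp932"))  # あ
try:
    byte_data.decode("utf_8")
except UnicodeDecodeError:
    print("not valid UTF-8")
```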
教師A
2025-02-13 07:21:23
教師A
2025-02-13 07:23:14
教師A
2025-02-14 07:33:40
DATA = """
""".strip()
"""
"""
import json, os

if "__main__" == __name__:
    data = [d.split("\t") for d in DATA.split("\n")]
    r_dict = dict()
    for d in data:
        print(d)
        k = tuple(d[0].split("-"))
        if len(k) != 2:
            raise KeyError("A1: ", k)
        # Check the joined key actually stored below (checking the tuple k would never match)
        if "/".join(k) in r_dict:
            raise KeyError("A2: ", k)
        r_dict["/".join(k)] = {
            "Unicode": d[1],
            "Name": d[2],
            "Category": "/".join(d[3:]).lstrip("# "),
        }
    print(r_dict)
    with open("out.json", "w", encoding="utf-8") as f:
        f.write(json.dumps(r_dict))
教師A
2025-02-24 17:54:53
For 2/25 (try after the regular tasks are done)
教師A
2025-03-02 23:45:23
教師A
2025-03-02 23:53:13
import os, csv, html, re, traceback

INPUT_ERROR_LOG_CSV_DIRECTORY_PATH = r"C:\Users\tttak\Desktop\BAF\in"
INPUT_ERROR_LOGS_MERGE_ROW_NUMBER = 0        # column holding the matched row number
INPUT_ERROR_LOGS_ERROR_CODE_NUMBER = None    # column used as the basis for banned rows
BAN_TARGET_ERROR_CODE_SET = {}               # set of banned error codes
INPUT_ERROR_LOGS_ACTION_ROW_NAME_ROW = 3     # name column
INPUT_SOURCE_CSV_DIRECTORY_PATH = r"C:\Users\tttak\Desktop\BAF\opt"
OUTPUT_MERGED_ERROR_LOG_LIST = r"C:\Users\tttak\Desktop\BAF\out"
REPLACE_TABLE_NAME_DICT = {"main": "主"}
BASE_CSS_TEXT = """
table { border-collapse: collapse; margin-top: 0px; }
.title { margin-top: 0.1em; margin-bottom: 0.1em; }
th, td { border: 1px solid #000000; padding: 3px; }
th { text-align: center; }
td { text-align: left; }
thead { position: sticky; top: 0; color: #000000; background: #dfdfdf; border-bottom: 3px double #000000; }
tbody { overflow-x: hidden; overflow-y: scroll; height: 100px; }
td.none_col { background-color: #999999; user-select: none; text-align: center; }
.red_mark { color: #550000; background-color: #ffcccc; }
""".strip()

def read_csv_file(csv_path, encoding="utf-8"):
    header, data = None, None
    with open(csv_path, mode="r", encoding=encoding, newline="") as file:
        reader = csv.reader(file)
        header = next(reader, [])       # read the header
        data = [row for row in reader]  # read the data rows
    return (header, data)

def write_csv_file(csv_path, data, header=None, encoding="utf-8"):
    os.makedirs(os.path.dirname(csv_path), exist_ok=True)  # create the directory if missing
    with open(csv_path, mode="w", encoding=encoding, newline="") as file:
        writer = csv.writer(file)
        if header is not None:
            writer.writerow(header)  # write the header
        writer.writerows(data)       # write the data rows

def write_html_file(html_path, data, header=None, red_marker_col_list=None, title=None, css_text=None, join_col_number=None):
    os.makedirs(os.path.dirname(html_path), exist_ok=True)  # create the directory if missing
    with open(html_path, mode="w", encoding="utf-8-sig") as file:
        file.write("<html><head>{0}{1}{2}</head><body>".format(
            "" if title is None else "<title>{0}</title>".format(title),
            "" if css_text is None else "<style>{0}</style>".format(css_text),
            "<style>.col_num_{0} {{ border-right: 3px double #000000; }}</style>".format(join_col_number) if join_col_number is not None else ""
        ))
        if title is not None and type(title) == str:
            file.write("<h3 class=\"title\">{0}</h3><hr>".format(html.escape(str(title))))
        file.write("<table>")
        max_width = max([len(m) for m in data]) if len(data) > 0 else 0
        max_width = max_width if header is None or max_width > len(header) else len(header)
        if header is not None:
            buffer_row = list()
            for i in range(max_width):
                if i < len(header) and header[i] is not None:
                    buffer_row.append("<th class=\"normal_col col_num_{1}\">{0}</th>".format(html.escape(str(header[i])), i+1))
                else:
                    buffer_row.append("<th class=\"none_col col_num_{0}\">(Null)</th>".format(i+1))
            file.write("<thead><tr>{0}</tr></thead>".format("".join(buffer_row)))
        red_marker_col_list = [None for _ in data] if red_marker_col_list is None or len(data) != len(red_marker_col_list) else red_marker_col_list
        for d, rmc in zip(data, red_marker_col_list):
            buffer_row = list()
            for i in range(max_width):
                if i < len(d) and d[i] is not None:
                    buffer_row.append("<td class=\"normal_col col_num_{1}{2}\">{0}</td>".format(html.escape(str(d[i])), i+1, " red_mark " if rmc is not None and rmc == i else ""))
                else:
                    buffer_row.append("<td class=\"none_col col_num_{0}{1}\"></td>".format(i+1, " red_mark " if rmc is not None and rmc == i else ""))
            file.write("<tr>{0}</tr>".format("".join(buffer_row)))
        file.write("</table></body></html>")

def replace_name(raw_name, replace_dict=REPLACE_TABLE_NAME_DICT):
    return raw_name if raw_name not in replace_dict else replace_dict[raw_name]

if "__main__" == __name__:
    csv_name_list = [file for file in os.listdir(INPUT_ERROR_LOG_CSV_DIRECTORY_PATH) if file.endswith(".csv")]
    for csv_name_one in csv_name_list:
        print("● {0}".format(csv_name_one))
        error_csv_file_path = os.path.join(INPUT_ERROR_LOG_CSV_DIRECTORY_PATH, csv_name_one)
        source_csv_file_path = os.path.join(INPUT_SOURCE_CSV_DIRECTORY_PATH, csv_name_one)
        # Caution: str.rstrip strips a character set, not a suffix, so names ending in c/s/v/. lose extra characters
        output_csv_file_path = os.path.join(OUTPUT_MERGED_ERROR_LOG_LIST, "{0}.{1}".format(replace_name(csv_name_one.rstrip(".csv")), "csv"))
        output_html_file_path = os.path.join(OUTPUT_MERGED_ERROR_LOG_LIST, "{0}.{1}".format(replace_name(csv_name_one.rstrip(".csv")), "html"))
        if not(os.path.isfile(error_csv_file_path)):
            raise Exception("Not found main csv. ('{0}')".format(error_csv_file_path.replace("'", "\\'")))
        buffer_msg, children_buffer_msg_list = ("( )", "No problem."), list()
        if not(os.path.isfile(source_csv_file_path)):
            buffer_msg = ("(!)", "Not found a pair source file. ('{0}')".format(source_csv_file_path.replace("'", "\\'")))
            print("  {0} {1}".format(buffer_msg[0], buffer_msg[1]))
            continue
        try:
            header_m, data_m = read_csv_file(error_csv_file_path, encoding="utf-8")
            header_s, data_s = read_csv_file(source_csv_file_path, encoding="utf-8")
            result, red_marker_col_list = list(), list()
            for dm in data_m:
                mm = dm + [None for _ in header_s]
                buffer_row_index = dm[INPUT_ERROR_LOGS_ACTION_ROW_NAME_ROW] if INPUT_ERROR_LOGS_ACTION_ROW_NAME_ROW is not None and INPUT_ERROR_LOGS_ACTION_ROW_NAME_ROW < len(dm) else None
                red_marker_col = None
                if buffer_row_index in header_s:
                    red_marker_col = header_s.index(buffer_row_index)
                    red_marker_col = len(header_m) + red_marker_col
                red_marker_col_list.append(red_marker_col)
                buffer_row_index = dm[INPUT_ERROR_LOGS_ERROR_CODE_NUMBER] if INPUT_ERROR_LOGS_ERROR_CODE_NUMBER is not None and INPUT_ERROR_LOGS_ERROR_CODE_NUMBER < len(dm) else None
                error_code_number = None if INPUT_ERROR_LOGS_ERROR_CODE_NUMBER is None or re.match(r"^[0-9]+$", buffer_row_index) is None else int(buffer_row_index)
                if error_code_number in BAN_TARGET_ERROR_CODE_SET:
                    children_buffer_msg_list.append(("(C)", "continue row. ({0})".format(error_code_number)))
                    continue
                buffer_row_index = dm[INPUT_ERROR_LOGS_MERGE_ROW_NUMBER] if INPUT_ERROR_LOGS_MERGE_ROW_NUMBER is not None and INPUT_ERROR_LOGS_MERGE_ROW_NUMBER < len(dm) else None
                merge_number = None if INPUT_ERROR_LOGS_MERGE_ROW_NUMBER is None or re.match(r"^[0-9]+$", buffer_row_index) is None else int(buffer_row_index)
                if merge_number is not None:
                    if len(data_s) > merge_number:
                        mm = dm + data_s[merge_number]
                        children_buffer_msg_list.append(("( )", "matching row. ({0})".format(merge_number)))
                    else:
                        children_buffer_msg_list.append(("(!)", "Nothing row. ({0})".format(merge_number)))
                else:
                    children_buffer_msg_list.append(("(!)", "Not found merged row. ('{0}')".format(buffer_row_index.replace("'", "\\'"))))
                result.append(mm)
            if len(data_m) != len(result):
                buffer_msg = ("(!)", "Failed to append. ({0} -> {1})".format(len(data_m), len(result)))
            write_csv_file(output_csv_file_path, result, header=header_m+header_s, encoding="utf-8")
            write_html_file(
                output_html_file_path, result, header=header_m+header_s,
                red_marker_col_list=red_marker_col_list,
                title="フィルタ除外一覧({0})".format(replace_name(csv_name_one.rstrip(".csv"))),
                css_text=BASE_CSS_TEXT,
                join_col_number=None if len(header_m) < 1 else len(header_m)
            )
        except Exception as e:
            buffer_msg = ("(E)", "[{0}]: {1}".format(type(e).__name__, str(e)))
            print(traceback.format_exc())
        print("  {0} {1}".format(buffer_msg[0], buffer_msg[1]))
        for cbm in children_buffer_msg_list:
            print("    {0} {1}".format(cbm[0], cbm[1]))
        print()
教師A
2025-03-09 22:34:58
Excel-generation asset for explanations
教師A
2025-03-09 22:35:39
import os, re, unicodedata, datetime
import openpyxl as oxl

CATCH_FILTER_RECORD_DIRECTORY_PATH_LIST = [
    os.path.join(os.path.dirname(os.path.abspath(__file__)), "LOG", "FILTER_RECORD_01"),
    os.path.join(os.path.dirname(os.path.abspath(__file__)), "LOG", "FILTER_RECORD_02"),
]
CATCH_FILTER_LOG_DIRECTORY_PATH = [
    os.path.join(os.path.dirname(os.path.abspath(__file__)), "LOG", "FILTER_LOG_01"),
    os.path.join(os.path.dirname(os.path.abspath(__file__)), "LOG", "FILTER_LOG_02"),
]
OUTPUT_EXCEL_FILE_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "LOG", "", "dump.xlsx")

def merge_two_data(
    main_data_2d_list, sub_data_2d_list, main_title_list, sub_title_list,
    join_column_main_to_sub_dict, main_data_ban_column_set=set(),
    sub_data_ban_column_set=set(), only_first_match_mode=False
):
    if len(main_data_2d_list) > 0 and len({len(d) for d in main_data_2d_list}) != 1:
        raise ValueError("The structure of the main table is broken.({0})".format(", ".join([str(l) for l in {len(d) for d in main_data_2d_list}])))
    if len(sub_data_2d_list) > 0 and len({len(d) for d in sub_data_2d_list}) != 1:
        raise ValueError("The structure of the sub table is broken.({0})".format(", ".join([str(l) for l in {len(d) for d in sub_data_2d_list}])))
    if len(main_data_2d_list) > 0 and len(main_data_2d_list[0]) != len(main_title_list):
        raise ValueError("The number of columns in the main table and the main table title are different. ({0}, {1})".format(len(main_data_2d_list[0]), len(main_title_list)))
    if len(sub_data_2d_list) > 0 and len(sub_data_2d_list[0]) != len(sub_title_list):
        raise ValueError("The number of columns in the sub table and the sub table title are different. ({0}, {1})".format(len(sub_data_2d_list[0]), len(sub_title_list)))
    for jck in join_column_main_to_sub_dict.keys():
        if jck not in main_title_list:
            raise ValueError("There are no join columns in the main table title. ('{0}')".format(str(jck).replace("'", "\\'")))
    for jcv in join_column_main_to_sub_dict.values():
        if jcv not in sub_title_list:
            raise ValueError("There are no join columns in the sub table title. ('{0}')".format(str(jcv).replace("'", "\\'")))
    join_index_main_list, join_index_sub_list = list(), list()
    for k, v in join_column_main_to_sub_dict.items():
        join_index_main_list.append(main_title_list.index(k))
        join_index_sub_list.append(sub_title_list.index(v))
    ban_main_column_set = {main_title_list.index(s) for s in list(main_data_ban_column_set)}
    ban_sub_column_set = {sub_title_list.index(s) for s in list(sub_data_ban_column_set)}
    buffer_title_list = [main_title_list[i] for i in range(len(main_title_list)) if i not in ban_main_column_set] + [sub_title_list[i] for i in range(len(sub_title_list)) if i not in ban_sub_column_set]
    join_point = len([main_title_list[i] for i in range(len(main_title_list)) if i not in ban_main_column_set])
    error_list = list()
    buffer_2d_data = list()
    for main_d in main_data_2d_list:
        main_join_value_list = [main_d[jimo] for jimo in join_index_main_list]
        main_buffer = [main_d[i] for i in range(len(main_d)) if i not in ban_main_column_set]
        reject_flag = True
        for sub_d in sub_data_2d_list:
            sub_join_value_list = [sub_d[jiso] for jiso in join_index_sub_list]
            if all([m == s for m, s in zip(main_join_value_list, sub_join_value_list)]):
                sub_buffer = [sub_d[i] for i in range(len(sub_d)) if i not in ban_sub_column_set]
                buffer_2d_data.append(main_buffer + sub_buffer)
                reject_flag = False
                if only_first_match_mode:
                    break
        if reject_flag:
            error_list.append("Found an unmatch record. ({0})".format(", ".join(["'{0}':'{1}'".format(str(t).replace("'", "\\'"), str(v).replace("'", "\\'")) for t, v in zip(main_title_list, main_d)])))
    return (buffer_title_list, buffer_2d_data, join_point, error_list)

MOTHER_TITLE = "みよし市 FBSS(住民記録・印鑑登録) DB フィルタリングレコード一覧"

class Report_generator:
    def __init__(self, output_path, mother_title=MOTHER_TITLE):
        self.output_path = output_path
        self.wb = wb = oxl.Workbook()
        self.ws_name_set = set()
        self.begin_datetime_text = datetime.datetime.now().strftime("%Y/%m/%d %H:%M:%S")
        self.mother_title = str(mother_title)
        self.active_row_dict = dict()

    def set(
        self, sheet_name, title_text, title_list, data_2d_list, join_point=None,
        search_column_name=None, right_flag_list=None, filter_dict=None
    ):
        right_flag_list_true = [(True if right_flag_list[i] else False) if len(right_flag_list) > i else False for i in range(len(title_list))] if right_flag_list is not None else [False for _ in range(len(title_list))]
        search_column_index = None
        if join_point is None:
            if search_column_name in title_list:
                search_column_index = title_list.index(search_column_name)
        else:
            if search_column_name in title_list[:join_point]:
                search_column_index = title_list.index(search_column_name)
        ws = self.wb.create_sheet(sheet_name, len(self.wb.sheetnames))
        self.ws_name_set.add(sheet_name)
        ws.freeze_panes = ws.cell(5, 1).coordinate  # freeze the header rows
        max_point = [4, len(title_list)+1]
        green_list = list()
        for y, d in enumerate(data_2d_list):  # values
            memory_value, memory_index = d[search_column_index], None
            if join_point is None:
                if memory_value in title_list[join_point:]:
                    memory_index = title_list.index(memory_value)
            else:
                if memory_value in title_list:
                    memory_index = title_list.index(memory_value)
            for x, dd in enumerate(d):
                c = ws.cell(y+5, x+2)
                if right_flag_list_true[x]:
                    c.value = int(dd)
                    c.number_format = "0"
                else:
                    c.value = str(dd)
                    c.number_format = "@"
                if memory_index is not None and x == memory_index:
                    green_list.append((y+5, x+2))
            max_point[0] = y+5
        side = oxl.styles.borders.Side(style="thin", color="000000")
        for y in range(4, max_point[0]+1):  # borders
            for x in range(2, max_point[1]+1):
                if join_point is not None and x == join_point + 1:
                    ws.cell(y, x).border = oxl.styles.borders.Border(
                        top=side, bottom=side, left=side,
                        right=oxl.styles.borders.Side(style="double", color="000000")
                    )
                else:
                    ws.cell(y, x).border = oxl.styles.borders.Border(top=side, bottom=side, left=side, right=side)
        for y in range(1, max_point[0]+1):  # white fill
            for x in range(1, max_point[1]+1):
                ws.cell(y, x).fill = oxl.styles.PatternFill(patternType="solid", fgColor="FFFFFF")
        for g in green_list:
            ws.cell(g[0], g[1]).fill = oxl.styles.PatternFill(patternType="solid", fgColor="A0FFA0")
        for x, tl in enumerate(title_list):  # column titles
            c = ws.cell(4, x+2)
            c.value = str(tl)
            c.number_format = "@"
            c.font = oxl.styles.Font(bold=True)
            c.fill = oxl.styles.PatternFill(
                patternType="solid",
                fgColor="A0A0FF" if join_point is not None and join_point > x else "D0D0FF"
            )
        for col in ws.columns:  # auto-fit the column widths
            max_length = 0
            column = col[0].column_letter
            for cell in col:
                try:
                    if len(str(cell.value)) > max_length:
                        max_length = len(cell.value)
                except:
                    pass
            adjusted_width = (max_length + 1) * 2
            ws.column_dimensions[column].width = adjusted_width + 2.5
        c = ws.cell(1, 1)
        c.value = str(self.mother_title)
        c.font = oxl.styles.Font(bold=True, size=20)
        c = ws.cell(2, 2)
        c.value = "- {0}".format(title_text)
        c.font = oxl.styles.Font(bold=True, size=15)
        ws.row_dimensions[3].height = 7.5
        ws.merge_cells(start_row=1, start_column=1, end_row=1, end_column=join_point+1 if join_point is None else len(title_list)+1)
        ws.merge_cells(start_row=2, start_column=2, end_row=2, end_column=join_point+1 if join_point is None else len(title_list)+1)
        for row in ws:  # set the font for the whole sheet
            for cell in row:
                cell.font = oxl.styles.Font(name="HG明朝B")
        ws.auto_filter.ref = "${0}${1}:${2}${3}".format(
            oxl.utils.get_column_letter(2), 4,
            oxl.utils.get_column_letter(max_point[1]), max_point[0]
        )  # configure the auto filter
        self.active_row_dict[sheet_name]
= {row[0].row:True for row in ws.iter_rows(min_row=5, max_row=max_point[0])} for k, v in filter_dict.items(): buffer_index = None if k in title_list: buffer_index = title_list.index(str(k)) if buffer_index is not None and type(v) in (list, tuple, set): ws.auto_filter.add_filter_column(buffer_index, list(v)) #オートフィルタ再指定 for row in ws.iter_rows(min_row=5, max_row=max_point[0]): #オートフィルタは自動設定されないため、手動定義 if row[buffer_index+2-1].value not in v: ws.row_dimensions[row[0].row].hidden = True self.active_row_dict[sheet_name][row[0].row] = False def dump(self, index_sheet_name="テーブル一覧", filter_title="有効"): for ws in self.wb.worksheets: #配下のsheet以外は削除 if ws.title not in self.ws_name_set: self.wb.remove(ws) ws = self.wb.create_sheet(str(index_sheet_name), 0) max_point, blue_font_list = [4, 8], list() for y, wsx in enumerate([wx for wx in self.wb.worksheets if wx != ws]): #値 c = ws.cell(y+5, 2) c.number_format = "0" c.value = y + 1 c = ws.cell(y+5, 3) c.number_format = "@" c.value = str(wsx.title) c.hyperlink = "#{0}!A1".format(wsx.title) blue_font_list.append((y+5, 3)) max_point[0] = y+5 c = ws.cell(y+5, 4) c.number_format = "0" c.value = int(len([f for f in self.active_row_dict[wsx.title].values() if f])) c = ws.cell(y+5, 5) c.number_format = "0" c.value = int(len(self.active_row_dict[wsx.title])) side = oxl.styles.borders.Side(style="thin", color="000000") for y in range(4, max_point[0]+1): #枠 for x in range(2, 6): ws.cell(y, x).border = oxl.styles.borders.Border(top=side, bottom=side, left=side, right=side) for y in range(1, max_point[0]+1): #白塗り for x in range(1, max_point[1]+1): ws.cell(y, x).fill = oxl.styles.PatternFill(patternType="solid", fgColor="FFFFFF") for x, tl in enumerate(["No.", "テーブル名称", "{0}行数".format(filter_title), "総検出行数"]): #列タイトル c = ws.cell(4, x+2) c.value = str(tl) c.number_format = "@" c.font = oxl.styles.Font(bold=True) c.fill = oxl.styles.PatternFill(patternType="solid", fgColor="A0A0FF") for col in ws.columns: #自動幅調整 max_length = 0 column = 
col[0].column_letter for cell in col: try: if len(str(cell.value)) > max_length: max_length = len(cell.value) except: pass adjusted_width = (max_length + 1) * 2 ws.column_dimensions[column].width = adjusted_width ws.column_dimensions[ws.cell(1, 3).column_letter].width = 100 c = ws.cell(1, 1) c.value = str(self.mother_title) c.font = oxl.styles.Font(bold=True, size=20) c = ws.cell(2, 2) c.value = "- {0}".format(index_sheet_name) c.font = oxl.styles.Font(bold=True, size=15) c = ws.cell(1, 4) c.value = "作成日時: {0}".format(self.begin_datetime_text) c.fill = oxl.styles.PatternFill(patternType="solid", fgColor="FFFFFF") ws.row_dimensions[3].height = 7.5 ws.merge_cells(start_row=1, start_column=1, end_row=1, end_column=3) ws.merge_cells(start_row=2, start_column=2, end_row=2, end_column=3) ws.merge_cells(start_row=1, start_column=4, end_row=1, end_column=8) for row in ws: #シート全体のフォントを変更 for cell in row: if (cell.row, cell.column) in blue_font_list: cell.font = oxl.styles.Font( name="HG明朝B", color="0000DD", underline="single" ) else: cell.font = oxl.styles.Font(name="HG明朝B") def save(self): os.makedirs(os.path.dirname(self.output_path), exist_ok=True) self.wb.save(self.output_path) def close(self): try: self.wb.close() except Exception: pass
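merge_two_data above is essentially an inner join that also collects unmatched main-table rows into an error list. A minimal standalone sketch of just that matching loop (simplified: positional key indices, no ban-column handling; names are illustrative, not part of the script):

```python
def inner_join_report(main_rows, sub_rows, key_main, key_sub):
    """Join rows on positional key indices; collect main rows with no partner."""
    joined, unmatched = [], []
    for m in main_rows:
        hits = [s for s in sub_rows
                if all(m[i] == s[j] for i, j in zip(key_main, key_sub))]
        if hits:
            joined.extend(m + s for s in hits)
        else:
            unmatched.append(m)
    return joined, unmatched
```

With only_first_match_mode=False the real function behaves the same way: one output row per matching sub row, and every main row that never matches becomes an error entry.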
教師A
2025-03-09 22:35:48
def check_column_type_is_free_int(data_2d_list, title_list, regex_column_name=r"^.*(番号)$"): if len(data_2d_list) > 0 and len(data_2d_list[0]) != len(title_list): raise ValueError("The number of columns in the table and the table title are different. ({0}, {1})".format(len(data_2d_list[0]), len(title_list))) return_flag_list = [True for _ in range(len(title_list))] for i in range(len(title_list)): if regex_column_name is None or re.search(regex_column_name, title_list[i]) is not None: for d in data_2d_list: if d[i] is not None: try: int(d[i]) except TypeError: return_flag_list[i] = False else: return_flag_list[i] = False return return_flag_list EX_MAIN_TITLE_LIST = ["ログ番号", "区分", "接続コード", "モード", "ロジック"] EX_MAIN_DATA_LIST = [ ["001", "1", "01", "データ1", "MAIN-01"], ["002", "1", "01", "データ1", "MAIN-01"], ["003", "1", "02", "データ1", "MAIN-01"], ["004", "1", "03", "データ2", "MAIN-01"], ["005", "2", "01", "データ3", "MAIN-10"], ["006", "2", "02", "データ2", "MAIN-10"], ["007", "2", "02", "データ2", "MAIN-10"], ] EX_SUB_TITLE_LIST = ["区分", "接続コードA", "データ1" ,"データ2"] EX_SUB_DATA_LIST = [ ["1", "01", "あ", "わ"], ["1", "02", "い", "わ"], ["1", "03", "い", "わ"], ["2", "02", "あ", "を"], ] BAN_EX_SUB_TITLE_SET = {"区分", "接続コードA"} if "__main__" == __name__: title_list, data_2d_list, join_point, error_list = merge_two_data( EX_MAIN_DATA_LIST, EX_SUB_DATA_LIST, EX_MAIN_TITLE_LIST, EX_SUB_TITLE_LIST, {"区分":"区分", "接続コード":"接続コードA"}, main_data_ban_column_set=set(), sub_data_ban_column_set=BAN_EX_SUB_TITLE_SET, only_first_match_mode=False ) print("{0} ( {1} )".format(title_list, join_point)) for d in data_2d_list: print(d) print() for e in error_list: print(e) right_flag_list = check_column_type_is_free_int(data_2d_list, title_list, regex_column_name=r"^.*(番号)$") rg = Report_generator(output_path=OUTPUT_EXCEL_FILE_PATH, mother_title=MOTHER_TITLE) rg.set( sheet_name="テスト", title_text="テスト", title_list=title_list, data_2d_list=data_2d_list, join_point=join_point, search_column_name="モード", 
right_flag_list=right_flag_list, filter_dict={"区分":["1"]} ) rg.dump(index_sheet_name="テーブル一覧", filter_title="主体除外") rg.save() rg.close()
教師A
2025-03-10 06:28:54
DATETIME in a batch file
set timex=%time: =0%
set year=%date:~0,4%
set month=%date:~5,2%
set day=%date:~8,2%
set hour=%timex:~0,2%
set minute=%timex:~3,2%
set second=%timex:~6,2%
set filename=%year%%month%%day%_%hour%%minute%%second%
教師A
2025-03-15 19:13:15
Extracting decompilation failures
import glob
import os

def find_orphan_classes(root_path):
    orphan_classes = []
    class_files = {os.path.splitext(f)[0] for f in glob.glob(f"{root_path}/**/*.class", recursive=True)}
    java_files = {os.path.splitext(f)[0] for f in glob.glob(f"{root_path}/**/*.java", recursive=True)}
    for class_file in class_files:
        if class_file not in java_files:
            orphan_classes.append(class_file + ".class")
    return orphan_classes

# Usage example
ROOT_PATH = "path/to/your/root/directory"  # change to any path
orphan_files = find_orphan_classes(ROOT_PATH)
print("No corresponding .java file was found for the following .class files:")
for file in orphan_files:
    print(file)
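One caveat: javac also emits inner and anonymous classes (Foo$Bar.class, Foo$1.class) that legitimately have no .java file of their own, so they show up as false positives. A sketch of a filter for those (the '$' naming is a compiler convention, so treat this as an assumption):

```python
import os

def is_synthetic_class(path: str) -> bool:
    """Inner/anonymous classes compiled by javac contain '$' in the file stem."""
    stem = os.path.splitext(os.path.basename(path))[0]
    return "$" in stem
```

Applying this before the orphan check leaves only top-level classes whose decompilation genuinely failed.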
教師A
2025-04-29 13:40:33
Java code for PAC (packed decimal) conversion
public class Main { /** * Long型の値をPAC(10進Packed Decimal)形式のbyte配列に変換する。 * * @param value Long型の数値(符号あり) * @param byteCount 出力するbyte配列のサイズ(PACのバイト数) * @return PAC形式のbyte配列 */ public static byte[] toPackedDecimal(long value, int byteCount) { boolean isNegative = value < 0; String digits = Long.toString(Math.abs(value)); if (digits.length() % 2 == 0) { digits = "0" + digits; } //桁数が偶数でない場合、先頭に0を追加 int requiredBytes = (digits.length() + 1) / 2; //符号が最後の4bitに入る if (requiredBytes > byteCount) { throw new IllegalArgumentException(String.format("Value is too large to fit in the specified byte count. (InputValue: %d, NeedBytesCount: %d, InputBytes: %d)", value, requiredBytes, byteCount)); } byte[] result = new byte[byteCount]; char signNibble = isNegative ? 'D' : 'C'; //末尾の符号(C(1100): 正、D(1101): 負) digits = digits + signNibble; int digitIndex = 0; for (int i = byteCount - requiredBytes; i < byteCount; i++) { int highNibble = Character.digit(digits.charAt(digitIndex++), 16); int lowNibble = Character.digit(digits.charAt(digitIndex++), 16); result[i] = (byte) ((highNibble << 4) | lowNibble); } return result; } /** * String型の数値を自動的に適切なバイト長でPAC変換する。 * * @param numberString 符号付きまたは符号なしの10進数文字列(例:"1234", "-56789") * @return PAC形式のbyte配列 */ public static byte[] toPackedDecimalFromString(String numberString) { if (numberString == null || numberString.isEmpty()) { throw new IllegalArgumentException("Input string is null or empty."); } long value; try { value = Long.parseLong(numberString); } catch (NumberFormatException e) { throw new IllegalArgumentException("Invalid number format: " + numberString); } int digitCount = numberString.startsWith("-") || numberString.startsWith("+") ? numberString.length() - 1 : numberString.length(); //桁数カウント(符号は除外) int totalDigitsWithSign = (digitCount % 2 == 0) ? 
digitCount + 2 : digitCount + 1; //奇数桁→1桁追加、+1は符号nibble int byteCount = totalDigitsWithSign / 2; //4bitで1桁 return toPackedDecimal(value, byteCount); } //テスト用 mainメソッド public static void main(String[] args) { String[] testInputs = {"", "1234", "-56789", "1", "-987654321"}; for (String input : testInputs) { byte[] pac = toPackedDecimalFromString(input); System.out.printf("Input: %20s, PAC10: ", input); for (byte b : pac) { System.out.printf("%02X ", b); } System.out.println(); } } }
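As a cross-check for the Java routine, the same nibble layout (zero-pad to an odd digit count, then a trailing sign nibble: C for positive, D for negative) fits in a few lines of Python:

```python
def to_packed_decimal(value: int) -> bytes:
    """COBOL-style packed decimal: BCD digits plus a trailing sign nibble."""
    digits = str(abs(value))
    if len(digits) % 2 == 0:
        digits = "0" + digits          # pad so digits + sign fill whole bytes
    sign = "D" if value < 0 else "C"   # C = positive, D = negative
    return bytes.fromhex(digits + sign)
```

to_packed_decimal(1234) yields the bytes 01 23 4C, matching the Java output for byteCount = 3.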
教師A
2025-05-07 18:22:53
教師A
2025-05-08 23:02:26
教師A
2025-05-11 00:53:34
教師A
2025-05-11 20:46:00
Switched to AES-GCM. Padding-oracle attacks were never a concern here anyway, but the built-in tamper detection is nice to have.
教師A
2025-06-01 16:54:44
Batch file that restarts Google Drive (reusable for other programs too)
@echo off setlocal echo [--]: Google Driveの再起動を行います。 REM プロセス名を設定する set "PROCESS_NAME=GoogleDriveFS.exe" REM Google Driveプロセスを終了する echo [--]: Google Driveプロセスを終了します。 :KILL_PROCESS taskkill /F /IM %PROCESS_NAME% > nul 2>&1 if %ERRORLEVEL% NEQ 0 ( echo [OK]: 全てのGoogle Driveプロセスが終了しました。 goto FIND_PATH ) echo [ ]: Google Driveプロセスを終了しています... REM 待機 timeout /t 2 /nobreak > nul goto KILL_PROCESS :FIND_PATH REM Google Driveの実行ファイルパスを見つける echo [--]: Google Driveの実行ファイルパスを検索します。 set "GOOGLE_DRIVE_PATH=" for /f "tokens=*" %%i in ('where /R "C:\Program Files" %PROCESS_NAME% 2^>nul') do ( set "GOOGLE_DRIVE_PATH=%%i" goto FOUND_PATH ) for /f "tokens=*" %%i in ('where /R "C:\Program Files (x86)" %PROCESS_NAME% 2^>nul') do ( set "GOOGLE_DRIVE_PATH=%%i" goto FOUND_PATH ) if not defined GOOGLE_DRIVE_PATH ( echo [NG]: Google Driveの実行ファイルパスが見つかりませんでした。 pause exit /b ) :FOUND_PATH REM 検出したパスを表示 echo [OK]: Google Driveの実行ファイルパスが見つかりました。( パス: '%GOOGLE_DRIVE_PATH%' ) REM パスが存在するか確認 echo [--]: Google Driveの実行ファイルの存在確認を行います。 if not exist "%GOOGLE_DRIVE_PATH%" ( echo [NG]: Google Driveの実行ファイルが見つかりません。 pause exit /b ) echo [OK]: Google Driveの実行ファイルの存在が確認されました。 REM 待機 timeout /t 5 /nobreak > nul REM Google Driveを再起動する echo [--]: Google Driveを実行します。 start "" "%GOOGLE_DRIVE_PATH%" if %ERRORLEVEL% NEQ 0 ( echo [NG]: Google Driveの再起動に失敗しました。( エラーコード: %ERRORLEVEL% ) pause exit /b ) echo [OK]: Google Driveは正常に再起動されました。 REM 待機 timeout /t 5 /nobreak > nul REM プロセスが再起動されたか確認する echo [--]: Google Driveプロセスの存在を確認します。 tasklist /FI "IMAGENAME eq %PROCESS_NAME%" | find /I "%PROCESS_NAME%" > nul if %ERRORLEVEL% NEQ 0 ( echo [NG]: Google Driveのプロセス確認に失敗しました。 pause exit /b ) echo [OK]: Google Driveプロセスが正常に再起動されていることを確認しました。 if /I not "%1" == "SKIP" ( if /I not "%1" == "SKI" ( if /I not "%1" == "SK" ( if /I not "%1" == "S" ( REM ここに引数が条件にマッチしない場合の処理を記述する echo. echo 終了するには何かキーを押してください . . . pause > nul ) ) ) ) endlocal
教師A
2025-06-08 22:37:39
Added a bulk-extraction feature (even though this defeats the point of the self-extracting mechanism)
教師A
2025-06-15 19:43:55
Look into a way to launch Python scripts from a batch file in parallel but in a defined order
(a forced-termination feature triggered by a special flag might also be a good idea)
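A sketch of the "parallel but ordered" launch in Python itself, using only subprocess (the special kill flag would have to be layered on top, e.g. via Popen.terminate()):

```python
import subprocess
import sys

def launch_ordered_parallel(commands):
    """Start every command immediately, then wait for them in launch order."""
    procs = [subprocess.Popen(cmd) for cmd in commands]
    return [p.wait() for p in procs]

# e.g. launch_ordered_parallel([[sys.executable, "a.py"], [sys.executable, "b.py"]])
```

Starting all Popen objects first gives the parallelism; waiting on the list in order gives the deterministic completion order.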
教師A
2025-06-21 17:43:28
cmd /c start /b "" /wait /high python target.py
教師A
2025-06-22 21:32:44
cmd /c makes sure the line is read as a command-prompt (cmd.exe) command
start /b "" /wait /high launches the new process at high priority in the current window (/b) and waits until it exits (/wait)
python target.py is the arbitrary command to run
教師A
2025-07-10 13:51:43
I want to do wareki-to-seireki (Japanese era to Gregorian) date conversion in PSQL
CREATE OR REPLACE FUNCTION wareki_to_seireki(wdate TEXT) RETURNS TEXT AS $$ DECLARE gengo_code TEXT; year_part TEXT; month_part TEXT; day_part TEXT; gengos RECORD; base_year INT; era_year INT; result_date TEXT; target_date DATE; start_date DATE; end_date DATE; BEGIN -- 8桁数字以外はエラー IF length(wdate) <> 8 OR wdate ~ '[^0-9]' THEN RETURN '99999999'; END IF; gengo_code := substring(wdate, 1, 2); year_part := substring(wdate, 3, 2); month_part := substring(wdate, 5, 2); day_part := substring(wdate, 7, 2); -- すでに西暦表記の可能性(20xxなど)→ そのまま返す IF gengo_code BETWEEN '20' AND '99' THEN RETURN wdate; END IF; -- 和暦の元号テーブル(元号コード, 開始日, 終了日) -- 開始日はその元号の"1年1月1日"ではなく、実際の施行日 FOR gengos IN SELECT * FROM (VALUES ('94', DATE '1989-01-08', DATE '2019-04-30'), -- 平成:1989-01-08 〜 2019-04-30 ('95', DATE '2019-05-01', NULL) -- 令和:2019-05-01 〜 現在 ) AS g(code, start_date, end_date) LOOP IF gengo_code = gengos.code THEN start_date := gengos.start_date; end_date := COALESCE(gengos.end_date, DATE '9999-12-31'); EXIT; END IF; END LOOP; IF start_date IS NULL THEN RETURN '99999999'; -- 対応元号がない END IF; -- 年・月・日を構成して変換候補日を作る BEGIN era_year := year_part::INT; result_date := (EXTRACT(YEAR FROM start_date)::INT + era_year - 1)::TEXT || month_part || day_part; target_date := to_date(result_date, 'YYYYMMDD'); EXCEPTION WHEN OTHERS THEN RETURN '99999999'; -- 無効な日付 END; -- 元号の有効範囲にあるか検証 IF target_date < start_date OR target_date > end_date THEN RETURN '99999999'; -- 範囲外 END IF; RETURN to_char(target_date, 'YYYYMMDD'); END; $$ LANGUAGE plpgsql;
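One thing to double-check in the PL/pgSQL above: the early return for gengo_code BETWEEN '20' AND '99' also matches the era codes '94' and '95', so the era branch may never be reached. The sketch below mirrors the intended logic in Python but looks up era codes first; the era table is the same two-row one as in the function:

```python
import datetime

# Era table mirroring the SQL function: code -> (era start, era end).
ERAS = {
    "94": (datetime.date(1989, 1, 8), datetime.date(2019, 4, 30)),   # Heisei
    "95": (datetime.date(2019, 5, 1), datetime.date(9999, 12, 31)),  # Reiwa
}

def wareki_to_seireki(wdate: str) -> str:
    if len(wdate) != 8 or not wdate.isdigit():
        return "99999999"
    code = wdate[:2]
    if code not in ERAS:
        return wdate  # assume already a Gregorian YYYYMMDD
    start, end = ERAS[code]
    era_year = int(wdate[2:4])
    try:
        d = datetime.date(start.year + era_year - 1, int(wdate[4:6]), int(wdate[6:8]))
    except ValueError:
        return "99999999"  # invalid date
    if not (start <= d <= end):
        return "99999999"  # outside the era's valid range
    return d.strftime("%Y%m%d")
```

Same error convention as the SQL: anything malformed, invalid, or out of range comes back as "99999999".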
教師A
2025-07-21 23:16:01
教師A
2025-07-21 23:17:13
The definition-injection part is module-dependent and unusable in non-Oracle DB environments, so title-list generation has to be handled on our side instead. From a personal-data-protection standpoint, the parameter list does not need to be used.
教師A
2025-07-31 07:28:59
<!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <title>SVG グループ親子サンプル</title> <style> svg { border: 1px solid #ccc; user-select: none; } circle.parent { fill: #aaf; stroke: #4477cc; stroke-width: 2px; cursor: move; } circle.child { fill: #faa; stroke: #cc4444; stroke-width: 1.5px; cursor: pointer; } </style> </head> <body> <svg id="svg" width="600" height="400"></svg> <script> const svg = document.getElementById('svg'); // 親グループ要素を作成 const parentGroup = document.createElementNS('http://www.w3.org/2000/svg', 'g'); parentGroup.setAttribute('transform', 'translate(150,150)'); svg.appendChild(parentGroup); // 親ノードを描画(大きな円) const parentCircle = document.createElementNS('http://www.w3.org/2000/svg', 'circle'); parentCircle.setAttribute('r', 80); parentCircle.classList.add('parent'); parentGroup.appendChild(parentCircle); // 子ノードの相対位置データ const childrenData = [ { id: 'c1', cx: -40, cy: -30 }, { id: 'c2', cx: 50, cy: 20 }, { id: 'c3', cx: 0, cy: 60 }, ]; // 子ノードを親グループ内に配置 childrenData.forEach(child => { const c = document.createElementNS('http://www.w3.org/2000/svg', 'circle'); c.setAttribute('r', 20); c.setAttribute('cx', child.cx); c.setAttribute('cy', child.cy); c.classList.add('child'); c.dataset.id = child.id; parentGroup.appendChild(c); // クリックイベント c.addEventListener('click', e => { alert(`Clicked child node: ${child.id}`); e.stopPropagation(); // 親への伝播防止 }); }); // ドラッグ用の変数 let isDragging = false; let dragStart = { x: 0, y: 0 }; let currentTranslate = { x: 150, y: 150 }; parentGroup.addEventListener('mousedown', e => { isDragging = true; dragStart = { x: e.clientX, y: e.clientY }; e.preventDefault(); }); window.addEventListener('mousemove', e => { if (!isDragging) return; const dx = e.clientX - dragStart.x; const dy = e.clientY - dragStart.y; const newX = currentTranslate.x + dx; const newY = currentTranslate.y + dy; parentGroup.setAttribute('transform', `translate(${newX},${newY})`); }); window.addEventListener('mouseup', e => { if (isDragging) { const dx = 
e.clientX - dragStart.x; const dy = e.clientY - dragStart.y; currentTranslate.x += dx; currentTranslate.y += dy; isDragging = false; } }); </script> </body> </html>
教師A
2025-07-31 07:45:39
On verifying that the graph is well-formed
<!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <title>Cytoscape Directed Graph Example</title> <script src="https://unpkg.com/cytoscape@3.26.0/dist/cytoscape.min.js"></script> <style> #cy { width: 800px; height: 600px; border: 1px solid #aaa; display: block; } </style> </head> <body> <div id="cy"></div> <script> const cy = cytoscape({ container: document.getElementById('cy'), elements: [ // ノード { data: { id: 'A' } }, { data: { id: 'B' } }, { data: { id: 'C' } }, { data: { id: 'D' } }, { data: { id: 'X' } }, { data: { id: 'Y' } }, // エッジ(有向) { data: { id: 'B_to_A', source: 'B', target: 'A' } }, { data: { id: 'C_to_A', source: 'C', target: 'A' } }, { data: { id: 'D_to_A', source: 'D', target: 'A' } }, { data: { id: 'A_to_X', source: 'A', target: 'X' } }, { data: { id: 'A_to_Y', source: 'A', target: 'Y' } }, ], style: [ { selector: 'node', style: { 'label': 'data(id)', 'background-color': '#61bffc', 'text-valign': 'center', 'text-halign': 'center', 'width': 50, 'height': 50, 'font-size': 16, 'color': '#000' } }, { selector: 'edge', style: { 'width': 3, 'line-color': '#888', 'target-arrow-color': '#888', 'target-arrow-shape': 'triangle', 'curve-style': 'bezier' } } ], layout: { name: 'breadthfirst', directed: true, padding: 30 } }); </script> </body> </html>
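For a directed graph like the one above, a basic well-formedness check is acyclicity. A sketch using Kahn's algorithm (pure Python, independent of Cytoscape):

```python
from collections import defaultdict, deque

def is_dag(edges):
    """Kahn's algorithm: True when the directed edge list contains no cycle."""
    indeg = defaultdict(int)
    adj = defaultdict(list)
    nodes = set()
    for s, t in edges:
        adj[s].append(t)
        indeg[t] += 1
        nodes.update((s, t))
    queue = deque(n for n in nodes if indeg[n] == 0)
    seen = 0
    while queue:
        n = queue.popleft()
        seen += 1
        for m in adj[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return seen == len(nodes)  # nodes left unprocessed are stuck in a cycle
```

This also matters for the breadthfirst layout used above, which assumes the directed graph has no cycles to produce sensible levels.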
教師A
2025-09-25 07:50:37
@echo off
setlocal enabledelayedexpansion

REM Example environment variables:
REM set "FILE_PATH_LIST=C:\test data\a b.txt;D:\other:drive\sample file.txt"
REM set "RECEIVE_DIRECTORY_PATH=C:\receive"

for %%A in ("%FILE_PATH_LIST:;=" "%") do (
    set "src=%%~A"
    REM Replace colons with underscores (note: '::' comments break inside parenthesized blocks, so use REM)
    set "relpath=!src::=_!"
    REM Drop the drive letter (e.g. C:\) to get the relative part
    set "relpath=!relpath:~2!"
    REM Full destination path
    set "dest=%RECEIVE_DIRECTORY_PATH%!relpath!"
    REM Create the destination directory
    for %%D in ("!dest!") do (
        if not exist "%%~dpD" (
            mkdir "%%~dpD"
        )
    )
    REM Copy the file
    copy "!src!" "!dest!" >nul
)
endlocal
教師A
2025-09-25 08:03:35
@echo off
setlocal enabledelayedexpansion

REM Destination root
set "RECEIVE_DIRECTORY_PATH=C:\receive"
REM Path list file
set "LIST_FILE=path_list.txt"

for /f "usebackq delims=" %%A in ("%LIST_FILE%") do (
    set "src=%%A"
    REM Replace colons with underscores ('::' comments break inside parenthesized blocks, so use REM)
    set "relpath=!src::=_!"
    REM Drop the drive letter (e.g. C:\)
    set "relpath=!relpath:~2!"
    REM Full destination path
    set "dest=%RECEIVE_DIRECTORY_PATH%!relpath!"
    REM Create the destination directory
    for %%D in ("!dest!") do (
        if not exist "%%~dpD" mkdir "%%~dpD"
    )
    REM Copy the file
    copy "!src!" "!dest!" >nul
)
endlocal
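Both batch scripts flatten a source path the same way: colon to underscore, then drop the two-character drive prefix. That mapping is easy to mirror and test in Python; map_dest is a hypothetical helper name, not part of the batch files:

```python
import os
import shutil

def map_dest(src: str, receive_root: str) -> str:
    """Reproduce the batch mapping: ':' -> '_', drop the 2-char drive prefix."""
    rel = src.replace(":", "_")[2:].lstrip("\\/")
    return os.path.join(receive_root, *rel.replace("\\", "/").split("/"))

def flatten_copy(src_paths, receive_root):
    for src in src_paths:
        dest = map_dest(src, receive_root)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.copy2(src, dest)
```

So "C:\test data\a b.txt" becomes "test data\a b.txt" under the receive root, and a second colon in the path (as in "D:\other:drive\...") is neutralized to an underscore.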
教師A
2025-11-24 17:17:23
教師A
2025-11-24 17:17:31
#!/usr/bin/env python3 """ jif_analyze.py 大量の .JIF ファイルからバイナリ構造を推察するための解析ツール 主な機能: - ヘッダ(先頭 N バイト)の頻度集計 - 先頭4バイトを uint32 として分布確認(例: 0x00000190 = 400 の発生頻度) - バイト周波数ヒストグラムとエントロピー(ファイル単位、スライディングウィンドウ) - 既知マジックナンバー(PNG/JPEG/GIF/TIFF/BMP/ZIP/zlib/PKCS)の位置検索 - JPEG/PNG 等の埋め込み抽出(見つかれば別ファイルに保存) - 代表サンプルの16進ダンプ出力 - 結果を CSV と JSON で保存 要件: Python 3.8+ 推奨: pandas, numpy, matplotlib(プロットを作る場合) """ import os import sys import argparse import struct import binascii import json import math from collections import Counter, defaultdict from pathlib import Path from concurrent.futures import ProcessPoolExecutor, as_completed # --- 設定 --- DEFAULT_HEAD_BYTES = 512 # 先頭 N バイトを取得 SLIDING_WINDOW = 1024 # スライドでエントロピーを見るウィンドウ(省略可) SAMPLE_COUNT = 10 # 代表ヘッダごとに出力するサンプル数 KNOWN_MAGIC = { 'JPEG_SOI': bytes.fromhex('FFD8FF'), 'JPEG_SOI_ALT': bytes.fromhex('FFD8FFE0'), # JFIF 'PNG': bytes.fromhex('89504E470D0A1A0A'), 'GIF87a': b'GIF87a', 'GIF89a': b'GIF89a', 'BMP': b'BM', 'TIFF_LE': bytes.fromhex('49492A00'), 'TIFF_BE': bytes.fromhex('4D4D002A'), 'ZIP': bytes.fromhex('504B0304'), 'ZLIB_78_01': bytes.fromhex('7801'), 'ZLIB_78_9C': bytes.fromhex('789C'), 'ZLIB_78_DA': bytes.fromhex('78DA'), 'PDF': b'%PDF-', } # 出力ディレクトリ OUT_DIR = Path("results_jif") OUT_DIR.mkdir(exist_ok=True) # ---------------------------------------------------------- # ユーティリティ # ---------------------------------------------------------- def read_head(path: Path, n=DEFAULT_HEAD_BYTES): with path.open('rb') as f: return f.read(n) def file_size(path: Path): return path.stat().st_size def uint32_le(b: bytes): if len(b) < 4: return None return struct.unpack('<I', b[:4])[0] def byte_freq(data: bytes): c = Counter(data) freq = [c.get(i, 0) for i in range(256)] return freq def entropy_from_counts(counts, total): ent = 0.0 for c in counts: if c == 0: continue p = c / total ent -= p * math.log2(p) return ent def file_entropy(data: bytes): counts = byte_freq(data) return entropy_from_counts(counts, len(data)) if len(data)>0 else 0.0 def 
find_subseq_positions(data: bytes, subseq: bytes): """subseq を data 内で全て探してオフセットリストを返す""" pos = [] start = 0 while True: i = data.find(subseq, start) if i == -1: break pos.append(i) start = i + 1 return pos def hexdump_snippet(data: bytes, length=64): return binascii.hexlify(data[:length]).decode('ascii') # ---------------------------------------------------------- # ファイル単位の解析(Worker) # ---------------------------------------------------------- def analyze_file(path): try: head = read_head(path, DEFAULT_HEAD_BYTES) size = file_size(path) u32 = uint32_le(head[:4]) head_hex = binascii.hexlify(head).decode('ascii') first8 = head[:8] # byte freq + entropy(先頭 N バイト) freq = byte_freq(head) ent = entropy_from_counts(freq, len(head)) if len(head)>0 else 0.0 # 全ファイルから検出したい known magic の探索(ファイル全体を読む) positions = {} with path.open('rb') as f: full = f.read() for name, sig in KNOWN_MAGIC.items(): # 最初の数ヶ所だけ見つける pos = find_subseq_positions(full, sig) if pos: positions[name] = pos[:10] # 最大10件だけ保存 # JPEG の SOI+EOI の可能性(簡易) jpeg_ranges = [] if 'JPEG_SOI' in positions or 'JPEG_SOI_ALT' in positions: # find EOI 0xFFD9 soi_pos = positions.get('JPEG_SOI', []) + positions.get('JPEG_SOI_ALT', []) eoi = bytes.fromhex('FFD9') for s in soi_pos: # search for EOI after s idx = full.find(eoi, s+2) if idx != -1: jpeg_ranges.append((s, idx+2)) # 先頭4バイトのパターン(hex) head4_hex = binascii.hexlify(head[:4]).decode('ascii') if len(head)>=4 else '' return { 'path': str(path), 'name': path.name, 'size': size, 'head4_uint32_le': u32, 'head4_hex': head4_hex, 'head_hex': head_hex[:256], # 長すぎないよう切る 'entropy_head': ent, 'found_magic': {k: v for k,v in positions.items()}, 'jpeg_ranges': jpeg_ranges, } except Exception as e: return {'path': str(path), 'error': str(e)} # ---------------------------------------------------------- # 集計・レポート作成 # ---------------------------------------------------------- def aggregate_results(results): # 頻出ヘッダ(先頭16バイト) head_counter = Counter() u32_counter = Counter() magic_counter 
= Counter() entropies = [] sizes = [] for r in results: if 'error' in r: continue head16 = r.get('head_hex', '')[:32] head_counter[head16] += 1 u32_counter[r.get('head4_uint32_le')] += 1 sizes.append(r.get('size',0)) entropies.append(r.get('entropy_head',0.0)) for k in r.get('found_magic',{}).keys(): magic_counter[k] += 1 summary = { 'total_files': len(results), 'unique_top16_headers': len(head_counter), 'top_headers': head_counter.most_common(20), 'u32_distribution_sample': u32_counter.most_common(20), 'magic_counts': magic_counter.most_common(), 'size_stats': { 'min': min(sizes) if sizes else 0, 'max': max(sizes) if sizes else 0, 'mean': sum(sizes)/len(sizes) if sizes else 0, }, 'entropy_stats': { 'min': min(entropies) if entropies else 0, 'max': max(entropies) if entropies else 0, 'mean': sum(entropies)/len(entropies) if entropies else 0, } } return summary # ---------------------------------------------------------- # 抽出処理(JPEG/PNG 等を見つけたらファイルに書き出す) # ---------------------------------------------------------- def extract_embedded(results): extract_dir = OUT_DIR / "extracted" extract_dir.mkdir(exist_ok=True) count = 0 for r in results: if 'error' in r: continue path = Path(r['path']) try: with path.open('rb') as f: data = f.read() # JPEG 抽出(簡易): SOI..EOI の範囲を保存 for (s,e) in r.get('jpeg_ranges', []): outp = extract_dir / f"{path.name}.jpeg_{s}_{e}.jpg" with outp.open('wb') as fo: fo.write(data[s:e]) count += 1 # PNG 抽出: search png signature and try to find IEND chunk png_sig = KNOWN_MAGIC['PNG'] start = 0 while True: s = data.find(png_sig, start) if s == -1: break # PNG end chunk: IEND (00 00 00 00 49 45 4E 44 AE 42 60 82) iend = b'\x00\x00\x00\x00IEND\xaeB`\x82' e = data.find(iend, s) if e != -1: e += len(iend) outp = extract_dir / f"{path.name}.png_{s}_{e}.png" with outp.open('wb') as fo: fo.write(data[s:e]) count += 1 start = e else: start = s + 8 except Exception as ex: print("extract error:", path, ex) return count # 
---------------------------------------------------------- # メイン # ---------------------------------------------------------- def main(args): p = Path(args.input) files = list(p.rglob("*.jif")) + list(p.rglob("*.JIF")) print(f"found {len(files)} .JIF files under {p}") results = [] # 並列で解析 with ProcessPoolExecutor(max_workers=args.workers) as ex: futures = {ex.submit(analyze_file, f): f for f in files} for fut in as_completed(futures): res = fut.result() results.append(res) # 集計 summary = aggregate_results(results) # JSON/CSV 出力 OUT_DIR.mkdir(exist_ok=True) (OUT_DIR / "file_results.json").write_text(json.dumps(results, indent=2)) (OUT_DIR / "summary.json").write_text(json.dumps(summary, indent=2)) # 書き出しの簡易CSV import csv csvfile = OUT_DIR / "file_results.csv" with csvfile.open('w', newline='', encoding='utf-8') as fo: writer = csv.writer(fo) writer.writerow(['path','name','size','head4_uint32_le','head4_hex','entropy_head','found_magic']) for r in results: writer.writerow([r.get('path'), r.get('name'), r.get('size'), r.get('head4_uint32_le'), r.get('head4_hex'), r.get('entropy_head'), ';'.join(r.get('found_magic',{}).keys())]) # 抽出 extracted = extract_embedded(results) print("summary:", json.dumps(summary, indent=2)) print(f"extracted {extracted} embedded files (jpeg/png) into {OUT_DIR / 'extracted'}") print("results written into:", OUT_DIR) if __name__ == "__main__": parser = argparse.ArgumentParser(description="Analyze .JIF files to infer binary structure.") parser.add_argument("input", help="directory containing .JIF files") parser.add_argument("--workers", type=int, default=4, help="parallel workers") args = parser.parse_args() main(args)
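The Shannon-entropy calculation this analyzer relies on can be sanity-checked in isolation: constant input should give 0 bits per byte, and a perfectly uniform byte distribution exactly 8.

```python
import math
from collections import Counter

def entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte."""
    total = len(data)
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in Counter(data).values())
```

Compressed or encrypted regions sit near 8 bits/byte, which is why the scripts treat high-entropy windows as candidate compressed blocks and low-entropy windows as candidate raw bitmaps.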
教師A
2025-11-24 17:17:47
#!/usr/bin/env python3 """ bitmap_range_scan.py 多数の .JIF ファイルから、非圧縮ビットマップ(低エントロピー)領域を 統計的に推定するためのスクリプト。 出力: - results_bitmap/entropy_positions.json - results_bitmap/low_entropy_map.csv - 統計的に出現率が高い「低エントロピー開始位置」 """ import os import math import json from pathlib import Path from collections import defaultdict, Counter from concurrent.futures import ProcessPoolExecutor, as_completed WINDOW = 2048 # 一度に読むバイト数(非圧縮画像だと数KB〜数十KB) STEP = 512 # どれだけずらして読むか LOW_ENTROPY = 6.5 # これ以下なら「非圧縮かもしれない」とみなす OUT_DIR = Path("results_bitmap") OUT_DIR.mkdir(exist_ok=True) def entropy(data: bytes): if not data: return 0.0 freq = [0]*256 for b in data: freq[b] += 1 ent = 0.0 total = len(data) for c in freq: if c == 0: continue p = c / total ent -= p * math.log2(p) return ent def process_file(path: Path): try: with path.open('rb') as f: data = f.read() size = len(data) low_positions = [] # スライディングエントロピー for off in range(0, size-WINDOW, STEP): chunk = data[off:off+WINDOW] ent = entropy(chunk) if ent < LOW_ENTROPY: low_positions.append(off) return { "file": str(path), "size": size, "low_positions": low_positions } except Exception as e: return {"file": str(path), "error": str(e)} def main(): p = Path(".") files = list(p.rglob("*.jif")) + list(p.rglob("*.JIF")) print(f"Found {len(files)} files") results = [] pos_counter = Counter() with ProcessPoolExecutor(max_workers=4) as ex: futs = {ex.submit(process_file, f): f for f in files} for fut in as_completed(futs): r = fut.result() results.append(r) if "low_positions" in r: for pos in r["low_positions"]: pos_counter[pos] += 1 # 保存 (OUT_DIR / "entropy_positions.json").write_text(json.dumps(results, indent=2)) # CSV 保存 with (OUT_DIR / "low_entropy_map.csv").open("w", encoding="utf-8") as fo: fo.write("offset,count\n") for off, c in pos_counter.most_common(): fo.write(f"{off},{c}\n") print("Top low-entropy offsets (possible bitmap start):") for off, c in pos_counter.most_common(20): print(f" offset {off}: {c} files") if __name__ == "__main__": main()
教師A
2025-11-24 17:17:58
#!/usr/bin/env python3
"""
compress_analyze.py

Statistical analysis script for estimating which compression scheme is
used inside JIF files.

Main features:
- sliding-window entropy
- detection of known compression signatures
- byte-frequency analysis
- offset statistics of compressed blocks across all files
"""
import math
import json
from pathlib import Path
from collections import Counter
from concurrent.futures import ProcessPoolExecutor, as_completed

WINDOW = 2048
STEP = 256
HIGH_ENT = 7.5  # windows above this are treated as compressed blocks
LOW_ENT = 5.5   # windows below this are treated as uncompressed / bitmap-like
OUT = Path("results_compress")
OUT.mkdir(exist_ok=True)

# known compression signatures
COMP_SIG = {
    "ZLIB_78_01": bytes.fromhex("7801"),
    "ZLIB_78_9C": bytes.fromhex("789C"),
    "ZLIB_78_DA": bytes.fromhex("78DA"),
    "LZMA": bytes.fromhex("5D00008000"),
    "LZFSE": bytes.fromhex("62767832"),
    "LZ4_FRAME": bytes.fromhex("04224D18"),
    "JPEG": bytes.fromhex("FFD8FF"),
    "PNG": bytes.fromhex("89504E470D0A1A0A"),
    "JBIG2": bytes.fromhex("974A4232"),
    # CCITT G4 has no distinct signature; it is inferred from the statistics below
}

def entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    freq = [0] * 256
    for b in data:
        freq[b] += 1
    ent = 0.0
    total = len(data)
    for c in freq:
        if c == 0:
            continue
        p = c / total
        ent -= p * math.log2(p)
    return ent

def find_signature(data: bytes):
    """Return (name, offset) for the first occurrence of each known signature."""
    found = []
    for name, sig in COMP_SIG.items():
        idx = data.find(sig)
        if idx != -1:
            found.append((name, idx))
    return found

def process_file(path: Path):
    try:
        data = path.read_bytes()
        size = len(data)
        # signature scan
        sigs = find_signature(data)
        # sliding-window entropy
        high_positions = []
        low_positions = []
        for off in range(0, size - WINDOW, STEP):
            ent = entropy(data[off:off + WINDOW])
            if ent > HIGH_ENT:
                high_positions.append(off)
            elif ent < LOW_ENT:
                low_positions.append(off)
        return {
            "file": str(path),
            "size": size,
            "signatures": sigs,
            "high_entropy": high_positions,
            "low_entropy": low_positions,
        }
    except Exception as e:
        return {"file": str(path), "error": str(e)}

def main():
    files = list(Path(".").rglob("*.JIF")) + list(Path(".").rglob("*.jif"))
    print("Found", len(files), "files")
    results = []
    sig_counter = Counter()
    high_pos_counter = Counter()
    low_pos_counter = Counter()
    with ProcessPoolExecutor(max_workers=4) as ex:
        futs = {ex.submit(process_file, f): f for f in files}
        for fut in as_completed(futs):
            r = fut.result()
            results.append(r)
            for name, pos in r.get("signatures", []):
                sig_counter[name] += 1
            for pos in r.get("high_entropy", []):
                high_pos_counter[pos] += 1
            for pos in r.get("low_entropy", []):
                low_pos_counter[pos] += 1
    # save results
    (OUT / "results.json").write_text(json.dumps(results, indent=2))
    (OUT / "signature_counts.json").write_text(json.dumps(sig_counter, indent=2))
    with (OUT / "high_entropy_map.csv").open("w") as fo:
        fo.write("offset,count\n")
        for off, c in high_pos_counter.most_common():
            fo.write(f"{off},{c}\n")
    with (OUT / "low_entropy_map.csv").open("w") as fo:
        fo.write("offset,count\n")
        for off, c in low_pos_counter.most_common():
            fo.write(f"{off},{c}\n")
    print("=== SIGNATURE COUNTS ===")
    print(sig_counter)
    print("Result saved in", OUT)

if __name__ == "__main__":
    main()
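As a standalone sanity check of the script's entropy thresholds, the same Shannon-entropy calculation separates constant, textual, and random-looking data cleanly. Note that the 7.5 / 5.5 cutoffs are this script's own heuristics, not universal constants, and the samples below are illustrations rather than JIF data:

```python
import math
import os

def entropy(data: bytes) -> float:
    # Shannon entropy in bits per byte (0.0 to 8.0), same formula as compress_analyze.py
    if not data:
        return 0.0
    freq = [0] * 256
    for b in data:
        freq[b] += 1
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in freq if c)

flat = b"A" * 2048                    # one repeated byte: entropy is exactly 0.0
text = b"the quick brown fox " * 100  # plain text: mid-range, well below LOW_ENT (5.5)
rand = os.urandom(2048)               # random bytes: close to 8.0, above HIGH_ENT (7.5)

print(entropy(flat))  # 0.0
print(entropy(text))  # below 5.5
print(entropy(rand))  # above 7.5
```

Compressed streams look statistically similar to `os.urandom` output, which is exactly why the high-entropy windows are treated as candidate compressed blocks.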
教師A
2025-11-24 23:20:27
300 (decimal) = 0x12C (hex)
400 (decimal) = 0x190 (hex)
600 (decimal) = 0x258 (hex)
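The conversions above can be double-checked with Python's built-in formatting:

```python
# decimal -> hex round trip for the offsets noted above
for n in (300, 400, 600):
    print(f"{n} (decimal) = 0x{n:X} (hex)")
# prints:
# 300 (decimal) = 0x12C (hex)
# 400 (decimal) = 0x190 (hex)
# 600 (decimal) = 0x258 (hex)
```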
教師A
2025-12-15 13:06:13
On a clean way to fill in the gaps in between
def decide_base_numbers(points, mode="first"):
    """
    mode: "first" or "last"
    return: {(A, B): N}
    """
    base_map = {}
    if mode == "first":
        for A, B, C, D, N in points:
            base_map.setdefault((A, B), N)
    elif mode == "last":
        for A, B, C, D, N in points:
            base_map[(A, B)] = N
    else:
        raise ValueError("mode must be 'first' or 'last'")
    return base_map

def build_base_map(points, final_end, mode="first"):
    """
    return: {(A, B): N}
    """
    # convert B to int for sorting and range arithmetic
    records = []
    for A, B, C, D, N in points:
        records.append((A, int(B), C, D, N))
    records.sort()

    base_map = {}
    last_A = None
    last_N = None
    for i, (A, B, C, D, N) in enumerate(records):
        if A != last_A:
            last_A = A
            last_N = N
            base_map[(A, f"{B:06d}")] = N
            continue
        prev_B = records[i - 1][1]
        # fill non-contiguous section-2 (B) values with the previous N
        for bb in range(prev_B + 1, B):
            base_map[(A, f"{bb:06d}")] = last_N
        # explicit B
        if mode == "first":
            base_map.setdefault((A, f"{B:06d}"), last_N)
        else:  # last
            base_map[(A, f"{B:06d}")] = N
        last_N = N

    # fill from the last B up to final_end
    if records:
        A, B, *_ = records[-1]
        end_B = int(final_end[1])
        for bb in range(B + 1, end_B):
            base_map[(A, f"{bb:06d}")] = last_N
    return base_map

def generate_records(start, end, base_map):
    wa, wb, wc, wd, wn = map(len, start)

    def z(n, w):
        return str(n).zfill(w)

    def to_int(t):
        return [int(x) for x in t]

    cur = to_int(start)
    endv = to_int(end)
    MAX_C = 10**wc - 1
    MAX_D = 10**wd - 1

    while tuple(cur[:4]) < tuple(endv[:4]):
        A, B, C, D, N = cur
        base_n = base_map[(z(A, wa), z(B, wb))]
        # decide the D range covered by this EX candidate
        start_d = D
        if cur[:3] == endv[:3]:
            end_d = endv[3] - 1
        else:
            end_d = MAX_D
        # same N as the BASE management section number -> do not emit
        if z(N, wn) != base_n:
            if start_d == 0 and end_d == MAX_D:
                yield ("EX", z(A, wa), z(B, wb), z(C, wc), "*****", z(N, wn))
            else:
                for d in range(start_d, end_d + 1):
                    yield ("EX", z(A, wa), z(B, wb), z(C, wc), z(d, wd), z(N, wn))
        # advance to the next C (carry into B on overflow)
        cur[3] = 0
        cur[2] += 1
        if cur[2] > MAX_C:
            cur[2] = 0
            cur[1] += 1

def points_to_ranges(points, final_end):
    for i in range(len(points) - 1):
        yield points[i], points[i + 1]
    yield points[-1], final_end

def generate_all(points, final_end, base_mode="first"):
    # decide the BASE management section number per (A, B)
    base_map = build_base_map(points, final_end=final_end, mode=base_mode)
    # emit BASE records
    for (A, B), N in base_map.items():
        yield ("BASE", A, B, N)
    print("base_map:", base_map)  # debug output
    # emit EX records
    for start, end in points_to_ranges(points, final_end):
        yield from generate_records(start, end, base_map)

base_records = []
exception_records = []

points = [
    ("23236", "010008", "99999", "99995", "001"),
    ("23236", "010009", "00000", "00003", "002"),
    ("23236", "010010", "00000", "00000", "003"),
    ("23236", "010010", "00000", "00005", "004"),
]
final_end = ("23236", "010011", "00000", "00000", "999")

if __name__ == "__main__":
    for rec in generate_all(points, final_end, base_mode="last"):
        if rec[0] == "BASE":
            base_records.append(rec[1:])
        else:
            exception_records.append(rec[1:])

    print("○ Input")
    for k in points + [final_end]:
        print(k)
    print()
    print("● Output: exceptions (EX)")
    for b in exception_records:
        print(b)
    print()
    print("● Output: base (BASE)")
    for b in base_records:
        print(b)
    print()
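The carry-forward idea behind build_base_map, where missing B values inherit the N of the previous record, can be isolated in a reduced sketch. `fill_gaps` here is a hypothetical helper written for illustration, not part of the script above:

```python
def fill_gaps(points, end_b):
    """Carry each N forward across missing B values, up to (not including) end_b.

    points: list of (B, N) with integer B; gaps between consecutive B values
    are filled with the N of the earlier point.
    """
    points = sorted(points)
    out = {}
    for (b, n), nxt in zip(points, points[1:] + [(end_b, None)]):
        # cover b itself plus every missing B up to the next explicit point
        for bb in range(b, nxt[0]):
            out[bb] = n
    return out

filled = fill_gaps([(8, "001"), (10, "003")], end_b=12)
print(filled)  # {8: '001', 9: '001', 10: '003', 11: '003'}
```

B=9 has no explicit record, so it inherits "001" from B=8; the full build_base_map does the same per A, with zero-padded string keys and the first/last mode choice for duplicate B values.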
教師A
2026-02-08 18:27:28
Generated at: 2026-04-20 00:17:12
Current page number: 1
Total pages: 1
Oldest message: 2024-10-22 20:43:19
Newest message: 2026-02-08 18:27:28
Main threads: 53
Sub-threads: 13
Estimated page size: 399.098KiB